Sep 24, 2018

Microsoft Azure gets new high-performance storage options

Microsoft Azure is getting a number of new storage options today that mostly focus on use cases where disk performance matters.

The first of these is Azure Ultra SSD Managed Disks, which are now in public preview. Microsoft says that these drives will offer “sub-millisecond latency,” which unsurprisingly makes them ideal for workloads where latency matters.

Earlier this year, Microsoft launched its Premium and Standard SSD Managed Disks offerings for Azure into preview. These ‘ultra’ SSDs represent the next tier up from the Premium SSDs, with even lower latency and higher throughput. They’ll offer up to 160,000 IOPS with less than a millisecond of read/write latency, and will come in sizes ranging from 4 GB to 64 TB.

Speaking of Standard SSD Managed Disks: that service is now generally available after only three months in preview. To top things off, all of Azure’s managed disk tiers (Premium and Standard SSD, as well as Standard HDD) now offer 8, 16 and 32 TB capacities.

Also new today is Azure Premium Files, which is now in preview. This, too, is an SSD-based service. Azure Files itself isn’t new, though; it gives users access to fully managed cloud file shares over the standard SMB protocol. The new premium tier promises higher throughput and lower latency for these kinds of SMB operations.
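
For anyone who hasn’t used Azure Files: once mounted, a share behaves like any other SMB network drive, and the same share can also be managed programmatically. Here’s a minimal sketch, assuming the azure-storage-file-share Python SDK; the connection string, share name and file name are placeholders, not anything Microsoft announced.

```python
# Hedged sketch: create an Azure Files share and upload a file to it.
# The connection string and all names below are illustrative placeholders.
from azure.storage.fileshare import ShareClient

share = ShareClient.from_connection_string(
    conn_str="<storage-account-connection-string>",
    share_name="reports")
share.create_share()  # one-time setup; raises an error if the share already exists

file_client = share.get_file_client("ignite-notes.txt")
with open("ignite-notes.txt", "rb") as data:
    file_client.upload_file(data)  # the file is then visible to any SMB mount of the share
```

The same share can also be mounted over SMB at \\<account>.file.core.windows.net\reports, which is where the premium tier’s throughput and latency improvements would show up.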


Sep 24, 2018

Microsoft hopes enterprises will want to use Cortana

In a world dominated by Alexa and the Google Assistant, Cortana suffers the fate of a perfectly good alternative that nobody uses and everybody forgets about. But Microsoft wouldn’t be Microsoft if it just gave up on its investment in this space, so it’s now launching the Cortana Skills Kit for Enterprise to see if that’s a niche where Cortana can succeed.

This new kit is an end-to-end solution for enterprises that want to build their own skills and agents. Of course, they could have done this before with the existing developer tools, and the kit isn’t all that different from those. Microsoft notes, though, that it is designed for deployment inside an organization and gives enterprises a dedicated platform for building these experiences.

The Skills Kit platform is based on the Microsoft Bot Framework and the Azure Cognitive Services Language Understanding feature.
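
To make that a bit more concrete, here is a minimal sketch of the kind of conversational agent the Bot Framework underpins, assuming the botbuilder-core Python SDK; the bot, its keyword matching and its replies are illustrative assumptions, and a real enterprise skill would hand intent detection to the Language Understanding service rather than matching strings.

```python
# Hedged sketch of a Bot Framework activity handler, the building block an
# enterprise Cortana skill would sit on top of. Everything here is illustrative.
from botbuilder.core import ActivityHandler, TurnContext


class ExpenseStatusBot(ActivityHandler):
    """Answers one internal question over chat; a real skill would call LUIS instead."""

    async def on_message_activity(self, turn_context: TurnContext):
        text = (turn_context.activity.text or "").lower()
        if "expense" in text:  # naive keyword check standing in for intent detection
            await turn_context.send_activity("Your latest expense report was approved.")
        else:
            await turn_context.send_activity(
                "I can look up the status of your expense reports for you.")
```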

Overall, this is probably not a bad bet on Microsoft’s part. I can see how some enterprises would want to build their own skills for their employees and customers to access internal data, for example, or to complete routine tasks.

For now, this tool is only available in private preview. No word on when we can expect a wider launch.


Sep 24, 2018

Microsoft updates its planet-scale Cosmos DB database service

Cosmos DB is undoubtedly one of the most interesting products in Microsoft’s Azure portfolio. It’s a fully managed, globally distributed multi-model database that offers throughput guarantees, a number of different consistency models and high read and write availability guarantees. Now that’s a mouthful, but basically, it means that developers can build a truly global product, write database updates to Cosmos DB and rest assured that every other user across the world will see those updates within 20 milliseconds or so. And to write their applications, they can pretend that Cosmos DB is a SQL- or MongoDB-compatible database, for example.
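
To ground that in code, here is a minimal sketch of writing to and querying Cosmos DB through its SQL (Core) API, assuming the azure-cosmos Python SDK; the endpoint, key, database and container names are placeholders, not details from Microsoft’s announcement.

```python
# Hedged sketch: basic Cosmos DB usage through the SQL API.
# The endpoint, key and all names below are placeholders.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<account>.documents.azure.com:443/",
                      credential="<primary-key>")
database = client.create_database_if_not_exists("shop")
container = database.create_container_if_not_exists(
    id="orders", partition_key=PartitionKey(path="/customerId"))

# Insert or update a document; Cosmos DB replicates it to every region
# configured for the account.
container.upsert_item({"id": "1001", "customerId": "c-42", "total": 18.5})

# Read it back with the SQL-flavored query syntax mentioned above.
for item in container.query_items(
        query="SELECT * FROM c WHERE c.customerId = 'c-42'",
        enable_cross_partition_query=True):
    print(item["id"], item["total"])
```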

Cosmos DB officially launched in May 2017, though in many ways it’s an evolution of Microsoft’s existing DocumentDB product, which was far less flexible. Today, a lot of Microsoft’s own products run on Cosmos DB, including the Azure Portal itself, as well as Skype, Office 365 and Xbox.

Today, Microsoft is extending Cosmos DB by launching its multi-master replication feature into general availability, as well as adding support for the Cassandra API, giving developers yet another option for bringing existing applications, in this case those written for Cassandra, to Cosmos DB.

Microsoft now also promises 99.999 percent read and write availability. Previously, its read availability promise was 99.99 percent. And while that may not seem like a big difference, it does show that after more than a year of operating Cosmos DB with customers, Microsoft now feels more confident that it’s a highly stable system. In addition, Microsoft is updating its write latency SLA and now promises less than 10 milliseconds at the 99th percentile.

“If you have write-heavy workloads, spanning multiple geos, and you need this near real-time ingest of your data, this becomes extremely attractive for IoT, web, mobile gaming scenarios,” Microsoft Cosmos DB architect and product manager Rimma Nehme told me. She also stressed that she believes Microsoft’s SLA definitions are far more stringent than those of its competitors.

The highlight of the update, though, is multi-master replication. “We believe that we’re really the first operational database out there in the marketplace that runs on such a scale and will enable globally scalable multi-master available to the customers,” Nehme said. “The underlying protocols were designed to be multi-master from the very beginning.”

Why is this such a big deal? With multi-master, developers can designate every region they run Cosmos DB in as a master in its own right, which makes the system far more scalable for writes. There’s no need to first send every update to a single master node, which may be far away, and have that node push it out to every other region. Instead, applications write to the nearest region and Cosmos DB handles the replication from there. If conflicting writes occur, users can decide how they should be resolved based on their own needs.
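
From the application’s side, taking advantage of multi-master is mostly a configuration change. A hedged sketch, again assuming the azure-cosmos Python SDK: the multiple_write_locations and preferred_locations arguments and the region names are assumptions for illustration, and the account has to be replicated to those regions in the first place.

```python
# Hedged sketch: a client configured for multi-region writes.
from azure.cosmos import CosmosClient

client = CosmosClient(
    "https://<account>.documents.azure.com:443/",
    credential="<primary-key>",
    multiple_write_locations=True,                   # allow writes in any replicated region
    preferred_locations=["West Europe", "East US"])  # try the closest region first

container = (client.get_database_client("shop")
                   .get_container_client("orders"))

# This write lands in the nearest listed region; Cosmos DB propagates it to the
# others and applies the account's conflict-resolution policy if another region
# updates the same document at nearly the same time.
container.upsert_item({"id": "1001", "customerId": "c-42", "status": "shipped"})
```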

Nehme noted that all of this still plays well with Cosmos DB’s existing set of consistency models. If you don’t spend your days thinking about database consistency models, then this may sound arcane, but there’s a whole area of computer science that focuses on little else than how best to handle a scenario where two users virtually simultaneously try to change the same cell in a distributed database.

Unlike most other databases, Cosmos DB allows for a choice of consistency models, ranging from strong to eventual, with three intermediary models in between. And it actually turns out that most Cosmos DB users opt for one of those intermediary models.
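
Choosing one of those models is a one-line decision for the developer. A small sketch, again assuming the azure-cosmos Python SDK; keep in mind that a client can only relax, never strengthen, the consistency level configured on the account, and the level shown here is just an example.

```python
# Hedged sketch: picking a consistency model when creating the client.
# "Session" is one of the three intermediary models between Strong and Eventual.
from azure.cosmos import CosmosClient

client = CosmosClient(
    "https://<account>.documents.azure.com:443/",
    credential="<primary-key>",
    consistency_level="Session")  # alternatives: "Strong", "BoundedStaleness",
                                  # "ConsistentPrefix", "Eventual"
```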

Interestingly, when I talked to Leslie Lamport, the Turing Award winner who developed some of the fundamental concepts behind these consistency models (and who also created the popular LaTeX document preparation system), he wasn’t all that sure that developers are making the right choice. “I don’t know whether they really understand the consequences or whether their customers are going to be in for some surprises,” he told me. “If they’re smart, they are getting just the amount of consistency that they need. If they’re not smart, it means they’re trying to gain some efficiency and their users might not be happy about that.” He noted that when you give up strong consistency, it’s often hard to understand what exactly is happening.

Strong consistency comes with drawbacks of its own, though, chief among them higher latency. “For strong consistency there are a certain number of roundtrip message delays that you can’t avoid,” Lamport noted.

The Cosmos DB team isn’t just building on some of the fundamental work Lamport did around databases; it’s also making extensive use of TLA+, the formal specification language Lamport developed in the late 1990s. Microsoft, like Amazon and others, now trains its engineers to use TLA+ to describe their algorithms mathematically before they implement them in whatever language they prefer.

“Because [Cosmos DB is] a massively complicated system, there is no way to ensure the correctness of it because we are humans, and trying to hold all of these failure conditions and the complexity in any one person’s — one engineer’s — head, is impossible,” Microsoft Technical Fellow Dharma Shukla noted. “TLA+ is huge in terms of getting the design done correctly, specified and validated using the TLA+ tools even before a single line of code is written. You cover all of those hundreds of thousands of edge cases that can potentially lead to data loss or availability loss, or race conditions that you had never thought about, but that two or three years after you have deployed the code can lead to some data corruption for customers. That would be disastrous.”

“Programming languages have a very precise goal, which is to be able to write code. And the thing that I’ve been saying over and over again is that programming is more than just coding,” Lamport added. “It’s not just coding, that’s the easy part of programming. The hard part of programming is getting the algorithms right.”

Lamport also noted that he deliberately chose to make TLA+ look like mathematics, not like another programming language. “It really forces people to think above the code level,” Lamport noted, adding that engineers often tell him that it changes the way they think.

As for those companies that don’t use TLA+ or a similar methodology, Lamport says he’s worried. “I’m really comforted that [Microsoft] is using TLA+ because I don’t see how anyone could do it without using that kind of mathematical thinking — and I worry about what the other systems that we wind up using built by other organizations — I worry about how reliable they are.”


Sep 24, 2018

Microsoft wants to put your data in a box

AWS has its Snowball (and Snowmobile truck), Google Cloud has its data transfer appliance and Microsoft has its Azure Data Box. All of these are physical appliances that allow enterprises to ship lots of data to the cloud by uploading it into these machines and then shipping them to the cloud. Microsoft’s Azure Data Box launched into preview about a year ago and today, the company is announcing a number of updates and adding a few new boxes, too.

First of all, the standard 50-pound, 100-terabyte Data Box is now generally available. If you’ve got a lot of data to transfer to the cloud — or maybe collect a lot of offline data — then FedEx will happily pick this one up and Microsoft will upload the data to Azure and charge you for your storage allotment.

If you’ve got a lot more data, though, then Microsoft now also offers the Azure Data Box Heavy. This new box, which is now in preview, can hold up to one petabyte of data. Microsoft did not say how heavy the Data Box Heavy is, though.

Also new is the Azure Data Box Edge, which is now also in preview. In many ways, this is the most interesting of the additions since it goes well beyond transporting data. As the name implies, Data Box Edge is meant for edge deployments where a company collects data. What makes this version stand out is that it’s basically a small data center rack that lets you process data as it comes in. It even includes an FPGA to run AI algorithms at the edge.

Using this box, enterprises can collect data, transform and analyze it locally, and then send it to Azure over the network (and not in a truck). That lets them cut back on bandwidth costs and avoid sending all of their raw data to the cloud for processing.

Also part of the same Data Box family is the Data Box Gateway. This one is a virtual appliance that runs on Hyper-V and VMware and lets users create a data transfer gateway for importing data into Azure. That’s not quite as interesting as a hardware appliance, but useful nonetheless.


Sep 24, 2018

Microsoft Teams gets bokeh and meeting recordings with transcripts

If you’ve ever attended a video meeting and wished that the speakers used really expensive cameras and lenses that allowed for that soft classy background blur of a portrait photo, then Microsoft wants to make that wish come true. The company announced a number of updates to Microsoft Teams today, and one of those is a feature that automatically detects faces and blurs the background behind a speaker.

While background blur is nice (or at least we have to assume it will be because we haven’t been able to try it yet), the more useful new feature in Teams is intelligent recordings. Teams can now automatically generate captions and provide time-coded transcripts for the replays. This feature is coming to Office 365 commercial customers now.

Microsoft first demoed these new transcription capabilities at its Build developer conference earlier this year. In that demo, the transcription service was able to distinguish between speakers and create a real-time transcript of the meeting.

If you want to create live streams and on-demand video for a wider audience inside your company, Teams is also getting that capability next month, together with Microsoft Stream and Yammer (which seems to be lingering in the shadow of Teams these days).


Sep 18, 2018

Microsoft launches new AI applications for customer service and sales

Like virtually every other major tech company, Microsoft is currently on a mission to bring machine learning to all of its applications. It’s no surprise then that it’s also bringing ‘AI’ to its highly profitable Dynamics 365 CRM products. A year ago, the company introduced its first Dynamics 365 AI solutions and today it’s expanding this portfolio with the launch of three new products: Dynamics 365 AI for Sales, Customer Service and Market Insights.

“Many people, when they talk about CRM, or ERP of old, they referred to them as systems of oppression, they captured data,” said Alysa Taylor, Microsoft corporate VP for business applications and industry. “But they didn’t provide any value back to the end user — and what that end user really needs is a system of empowerment, not oppression.”

It’s no secret that few people love their CRM systems (except for maybe a handful of Dreamforce attendees), but ‘system of oppression’ is far from the ideal choice of words here. Yet Taylor is right that early systems often kept data siloed. Unsurprisingly, Microsoft argues that Dynamics 365 does not do that, allowing it to now use all of this data to build machine learning-driven experiences for specific tasks.

Dynamics 365 AI for Sales, unsurprisingly, is meant to help sales teams get deeper insights into their prospects using sentiment analysis. That’s obviously among the most basic of machine learning applications these days, but AI for Sales also helps these salespeople understand what actions they should take next and which prospects to prioritize. It’ll also help managers coach their individual sellers on the actions they should take.

Similarly, the Customer Service app focuses on using natural language understanding to understand and predict customer service problems and leverage virtual agents to lower costs. Taylor used this part of the announcement to throw some shade at Microsoft’s competitor Salesforce. “Many, many vendors offer this, but they offer it in a way that is very cumbersome for organizations to adopt,” she said. “Again, it requires a large services engagement, Salesforce partners with IBM Watson to be able to deliver on this. We are now out of the box.”

Finally, Dynamics 365 AI for Market Insights does just what the name implies: it provides teams with data about social sentiment, but this, too, goes a bit deeper. “This allows organizations to harness the vast amounts of social sentiment, be able to analyze it, and then take action on how to use these insights to increase brand loyalty, as well as understand what newsworthy events will help provide different brand affinities across an organization,” Taylor said. So the next time you see a company try to gin up some news, maybe it did so based on recommendations from Dynamics 365 AI for Market Insights.

Sep 15, 2018

Why the Pentagon’s $10 billion JEDI deal has cloud companies going nuts

By now you’ve probably heard of the Defense Department’s massive winner-take-all $10 billion cloud contract dubbed the Joint Enterprise Defense Infrastructure (or JEDI for short).

Star Wars references aside, this contract is huge, even by government standards. The Pentagon would like a single cloud vendor to build out its enterprise cloud, believing, rightly or wrongly, that this is the best approach to maintain focus and control of its cloud strategy.

Department of Defense (DOD) spokesperson Heather Babb tells TechCrunch the department sees a lot of upside by going this route. “Single award is advantageous because, among other things, it improves security, improves data accessibility and simplifies the Department’s ability to adopt and use cloud services,” she said.

Whichever company it chooses to fill this contract, this is about modernizing the department’s computing infrastructure and its combat forces for a world of IoT, artificial intelligence and big data analysis, while consolidating some of its older infrastructure. “The DOD Cloud Initiative is part of a much larger effort to modernize the Department’s information technology enterprise. The foundation of this effort is rationalizing the number of networks, data centers and clouds that currently exist in the Department,” Babb said.

Setting the stage

It’s possible that whoever wins this DOD contract could have a leg up on other similar projects in the government. After all, it’s not easy to pass muster on security and reliability with the military, and if one company can prove it is capable in this regard, it could be set up well beyond this one deal.

As Babb explains it though, it’s really about figuring out the cloud long-term. “JEDI Cloud is a pathfinder effort to help DOD learn how to put in place an enterprise cloud solution and a critical first step that enables data-driven decision making and allows DOD to take full advantage of applications and data resources,” she said.


The single-vendor component, however, could explain why the various cloud vendors who are bidding have lost their minds a bit over it — everyone except Amazon, that is, which has been mostly silent, happy apparently to let the process play out.

The belief among the various other players is that Amazon is in the driver’s seat for this bid, possibly because it delivered a $600 million cloud contract for the government in 2013, standing up a private cloud for the CIA. That was a big deal back in the day on a couple of levels. First of all, it was the first large-scale example of an intelligence agency using a public cloud provider. And of course the amount of money was pretty impressive for the time, not $10 billion impressive, but a nice contract.

For what it’s worth, Babb dismisses such talk, saying that the process is open and no vendor has an advantage. “The JEDI Cloud final RFP reflects the unique and critical needs of DOD, employing the best practices of competitive pricing and security. No vendors have been pre-selected,” she said.

Complaining loudly

As the Pentagon moves toward selecting its primary cloud vendor for the next decade, Oracle in particular has been complaining to anyone who will listen that Amazon has an unfair advantage in the deal, going so far as to file a formal complaint last month, even before bids were in and long before the Pentagon made its choice.


Somewhat ironically, given its own past business model, Oracle complained, among other things, that the deal would lock the department into a single platform over the long term. It also questioned whether the bidding process adhered to procurement regulations for this kind of deal, according to a report in The Washington Post. In April, Bloomberg reported that Oracle co-CEO Safra Catz complained directly to the president that the deal was tailor-made for Amazon.

Microsoft hasn’t been happy about the one-vendor idea either, pointing out that by limiting itself to a single vendor, the Pentagon could miss out on innovation from the other companies in the rapidly shifting cloud market, especially for a contract that stretches out over so many years.

As Microsoft’s Leigh Madden told TechCrunch in April, the company is prepared to compete, but doesn’t necessarily see a single vendor approach as the best way to go. “If the DOD goes with a single award path, we are in it to win, but having said that, it’s counter to what we are seeing across the globe where 80 percent of customers are adopting a multi-cloud solution,” he said at the time.

He has a valid point, but the Pentagon seems hell-bent on going forward with the single-vendor idea, even though the cloud offers much greater interoperability than the proprietary stacks of the 1990s (of which Oracle and Microsoft were prime examples at the time).

Microsoft has its own large DOD contract in place for almost a billion dollars, although this deal from 2016 was for Windows 10 and related hardware for DOD employees, rather than a pure cloud contract like Amazon has with the CIA.

It also recently released Azure Stack for government, a product that lets government customers install a private version of Azure with all the same tools and technologies you find in the public version, and could prove attractive as part of its JEDI bid.

Cloud market dynamics

It’s also possible that Amazon’s control of the largest chunk of the cloud infrastructure market plays a role here at some level. While Microsoft has been coming on fast, its cloud business is still only about a third the size of Amazon’s, as Synergy Research’s Q4 2017 data clearly shows.

The market hasn’t shifted dramatically since that data came out. While market share alone wouldn’t be a deciding factor, Amazon came to market first and is bigger than the next four providers combined, according to Synergy. That could explain why the other players are lobbying so hard and see Amazon as the biggest threat here; given its sheer size, it’s probably the biggest threat in almost every deal where they come up against it.

Consider also that Oracle, which seems to be complaining the loudest, was rather late to the cloud after years of dismissing it. It could see JEDI as a chance to establish a foothold in government that it could then use to build out its cloud business in the private sector, too.

10 years might not be 10 years

It’s worth pointing out that the actual deal has the complexity and opt-out clauses of a sports contract: only an initial two-year term is guaranteed, followed by two three-year options and a final two-year option (2 + 3 + 3 + 2 years adds up to the full 10). The idea is that if the arrangement turns out badly, the Pentagon has several points at which it can back out.


In spite of the winner-take-all approach of JEDI, Babb indicated that the agency will continue to work with multiple cloud vendors no matter what happens. “DOD has and will continue to operate multiple clouds and the JEDI Cloud will be a key component of the department’s overall cloud strategy. The scale of our missions will require DOD to have multiple clouds from multiple vendors,” she said.

The DOD accepted final bids in August, then extended the Request for Proposal deadline to October 9th. Unless the deadline gets extended again, we’re probably going to hear who the lucky company is sometime in the coming weeks, and chances are there is going to be a lot of whining and continued maneuvering from the losers when that happens.

Sep 6, 2018

PagerDuty raises $90M to wake up more engineers in the middle of the night

PagerDuty, the popular service that helps businesses monitor their tech stacks, manage incidents and alert engineers when things go sideways, today announced that it has raised a $90 million Series D round at a valuation of $1.3 billion. With this, PagerDuty, which was founded in 2009, has now raised well over $170 million.

The round was led by T. Rowe Price Associates and Wellington Management; Accel, Andreessen Horowitz and Bessemer Venture Partners also participated. Given the leads in this round, chances are that PagerDuty is gearing up for an IPO.

“This capital infusion allows us to continue our investments in innovation that leverages artificial intelligence and machine learning, enabling us to help our customers transform their companies and delight their customers,” said Jennifer Tejada, CEO at PagerDuty in today’s announcement. “From a business standpoint, we can strengthen our investment in and development of our people, our most valuable asset, as we scale our operations globally. We’re well positioned to make the lives of digital workers better by elevating work to the outcomes that matter.”

Currently, PagerDuty’s users include the likes of GE, Capital One, IBM, Spotify and virtually every other software company you’ve ever heard of. In total, more than 10,500 enterprises now use the service. While it’s best known for its alerting capabilities, PagerDuty has expanded well beyond that over the years, though alerting remains a core part of the service. Earlier this year, for example, the company announced its new AIOps services that aim to help businesses reduce the number of noisy and unnecessary alerts. I’m sure there are a lot of engineers who are quite happy about that (and now sleep better).

Sep 6, 2018

Microsoft commits to fixing custom apps broken by Windows 10 upgrades

Microsoft wants to make life easier for enterprise customers. Starting today, it is committing to fix any custom applications that may break as a result of updates to Windows 10 or the Office 365 product suite.

Most large companies have a series of custom applications that play a crucial role inside their organizations. When you update Windows and Office 365, Murphy’s Law of updates says one or more of those applications is going to break.

Up until this announcement, when that inevitably happened, it was entirely the customer’s problem. Microsoft has taken a huge step today by promising to help companies understand which applications are likely to break when they install updates, and to help fix them if breakage happens anyway.

One of the reasons the company can afford to be so generous is that it has data suggesting the vast majority of applications won’t break when customers move from Windows 7 to Windows 10. “Using millions of data points from customer diagnostic data and the Windows Insider validation process, we’ve found that 99 percent of apps are compatible with new Windows updates,” Microsoft’s Jared Spataro wrote in a blog post announcing these programs.

To that end, Microsoft has a new tool called Desktop Deployment Analytics, which maps your applications and uses artificial intelligence to predict which of them are most likely to have problems with an update.

“You now have the ability with the cloud to have intelligence in how you manage these end points and get smart recommendations around how you deploy Windows,” Spataro, who is corporate vice president of Microsoft 365, told TechCrunch.

Even with that kind of intelligence-driven preventive approach, things still break, and that’s where the next program, Desktop App Assure, comes into play. It’s a service designed to address any application compatibility issues with Windows 10 and Office 365 ProPlus. In fact, Microsoft has promised to assign an engineer to a company to fix anything that breaks, even if it’s unique to a particular organization.

That’s quite a commitment, and Spataro recognizes that there will be plenty of skeptics where this program in particular is concerned. He says that it’s up to Microsoft to deliver what it’s promised.

Over the years, organizations have spent countless resources getting applications to work after Windows updates, sometimes leaving older versions in place for years to avoid incompatibility problems. These programs, in theory, remove that pain point entirely, placing the burden of fixing the applications squarely on Microsoft.

“We will look to make changes in Windows or Office before we ask you to make changes in your custom application,” Spataro says, but if that doesn’t solve it, they have committed to helping you fix it.

Finally, the company heard a lot of complaints from customers when it announced it was ending extended support for Windows 7 in 2020. Spataro said Microsoft listened to its customers and has now extended paid support until 2024, letting companies change at their own pace. Theoretically, however, if Microsoft can assure customers that updating won’t break things, and commits to fixing them when it does, that should help move customers to Windows 10, which appears to be the company’s goal here.

The company also made changes to the standard support and update cadence for Windows 10 and Office 365.

All of these programs appear to be a major shift in how Microsoft has traditionally done business, showing a much stronger commitment to serving the needs of enterprise customers and shifting the cost of fixing custom applications from the customer to Microsoft when updates to its core products cause issues. But Microsoft is making this commitment knowing it can prevent a lot of those incompatibility problems before they happen, which makes a program like this easier to offer.

Aug 29, 2018

Google takes a step back from running the Kubernetes development infrastructure

Google today announced that it is providing the Cloud Native Computing Foundation (CNCF) with $9 million in Google Cloud credits to help further its work on the Kubernetes container orchestrator and that it is handing over operational control of the project to the community. These credits will be split over three years and are meant to cover the infrastructure costs of building, testing and distributing the Kubernetes software.

Why does this matter? Until now, Google hosted virtually all the cloud resources that support the project (its CI/CD testing infrastructure, container downloads and DNS services) on its own cloud. But Google is now taking a step back. With the Kubernetes community reaching a state of maturity, Google is transferring all of this to the community.

Between the testing infrastructure and hosting container downloads, the Kubernetes project regularly runs more than 150,000 containers on 5,000 virtual machines, so the cost of running these systems quickly adds up. The Kubernetes container registry has served almost 130 million downloads since the launch of the project.

It’s also worth noting that the CNCF now includes a wide range of members that typically compete with each other. We’re talking Alibaba Cloud, AWS, Microsoft Azure, Google Cloud, IBM Cloud, Oracle, SAP and VMware, for example. All of these profit from the work of the CNCF and the Kubernetes community. Google doesn’t say so outright, but it’s fair to assume that it wanted others to shoulder some of the burdens of running the Kubernetes infrastructure, too. Similarly, some of the members of the community surely didn’t want to be so closely tied to Google’s infrastructure, either.

“By sharing the operational responsibilities for Kubernetes with contributors to the project, we look forward to seeing the new ideas and efficiencies that all Kubernetes contributors bring to the project operations,” Google Kubernetes Engine product manager William Deniss writes in today’s announcement. He also notes that a number of Google’s own engineers will still be involved in running the Kubernetes infrastructure.

“Google’s significant financial donation to the Kubernetes community will help ensure that the project’s constant pace of innovation and broad adoption continue unabated,” said Dan Kohn, the executive director of the CNCF. “We’re thrilled to see Google Cloud transfer management of the Kubernetes testing and infrastructure projects into contributors’ hands — making the project not just open source, but openly managed, by an open community.”

It’s unclear whether the project plans to take some of the Google-hosted infrastructure and move it to another cloud, but it could definitely do so — and other cloud providers could step up and offer similar credits, too.
