Jan
14
2019
--

Salesforce Commerce Cloud updates keep us shopping with AI-fueled APIs

As people increasingly shop on their mobile phones and other devices, it has become imperative for vendors to make the shopping experience as simple as possible, given the small screen. One way to do that is with artificial intelligence. Today, Salesforce announced a set of AI-enhanced APIs designed to keep us engaged as shoppers.

For starters, the company wants to keep you shopping. That means providing an intelligent recommendation engine: if you searched for a particular jacket, it might suggest similar styles, or a matching scarf and gloves. That’s fairly basic as shopping experiences go, but Salesforce didn’t stop there. It’s letting developers embed this product-recommendation capability in any app, whether that’s maps, social or mobile.

That means shopping recommendations could pop up anywhere developers think they make sense, such as in your maps app. Whether or not consumers see this as a positive, Salesforce says that adding intelligence to the shopping experience increases sales anywhere from 7 to 16 percent, so however you feel about it, it seems to be working.

The company also wants to make shopping itself simpler. Instead of entering a multi-faceted search, the traditional way of shopping online — footwear, men’s, sneakers, red — you can take a picture of a sneaker (or anything else you like) and the visual search algorithm should recognize it and make recommendations based on that picture. That reduces data entry, which is typically a pain on a mobile device, even when simplified by checkboxes.

Salesforce is also offering inventory availability as a service, letting shoppers see exactly where in the world the item they want is in stock. If they want to pick it up in-store that day, the service shows where the store is on a map and could even pass that location to a ridesharing app to indicate exactly where they want to go. The idea is to create a seamless experience between consumer desire and purchase.
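
To make the idea concrete, here is a minimal sketch of what calling such services could look like from a developer’s side. The host, paths, parameters and response shape are all hypothetical stand-ins, not Salesforce’s documented API:

```python
# Hypothetical sketch only: endpoint names, parameters and response
# fields below are illustrative assumptions, not Salesforce's real API.
import requests

BASE = "https://commerce.example.com/api"  # placeholder host
HEADERS = {"Authorization": "Bearer <access-token>"}

# Ask for products similar to one the shopper just viewed.
recs = requests.get(
    f"{BASE}/einstein/recommendations",
    params={"productId": "jacket-123", "limit": 5},
    headers=HEADERS,
).json()

# Check where the top recommendation is available for in-store pickup.
availability = requests.get(
    f"{BASE}/inventory/availability",
    params={"productId": recs["products"][0]["id"], "near": "94105"},
    headers=HEADERS,
).json()
print(availability["stores"])
```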

Finally, Salesforce has added some goodies for developers, too, including the ability to browse the Salesforce API library and find the APIs that make the most sense for what they’re building, complete with code snippets to get started. It may not seem like a big deal, but as companies the size of Salesforce expand their API catalogs (especially with the MuleSoft acquisition), it gets harder to know what’s available. The company has also created a sandboxing capability that lets developers experiment and build with these APIs safely.

The basis of Commerce Cloud is Demandware, the company Salesforce acquired in 2016 for $2.8 billion. Salesforce’s intelligence platform is called Einstein, and in spite of the attempt to personify the technology, it’s really about bringing artificial intelligence across the Salesforce platform of products, as it has with today’s API announcements.

Jan
09
2019
--

AWS gives open source the middle finger

AWS launched DocumentDB today, a new database offering that is compatible with the MongoDB API. The company describes DocumentDB as a “fast, scalable, and highly available document database that is designed to be compatible with your existing MongoDB applications and tools.” In effect, it’s a hosted drop-in replacement for MongoDB that doesn’t use any MongoDB code.

AWS argues that while MongoDB is great at what it does, its customers have found it hard to build fast, highly available applications on the open-source platform that can scale to multiple terabytes and hundreds of thousands of reads and writes per second. So the company built its own document database and made it compatible with the MongoDB 3.6 API, which was released under the Apache 2.0 license.
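
Because DocumentDB implements the MongoDB 3.6 wire API rather than running MongoDB itself, an existing driver should connect to it unchanged. A minimal sketch with pymongo; the cluster endpoint, credentials and CA bundle path are placeholders:

```python
# Minimal sketch: a standard MongoDB driver talking to DocumentDB.
# Endpoint and credentials are placeholders; the CA bundle is the
# certificate file AWS publishes for its managed databases.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://myuser:mypassword@mycluster.docdb.amazonaws.com:27017",
    tls=True,
    tlsCAFile="rds-combined-ca-bundle.pem",
)

db = client["shop"]
db.orders.insert_one({"item": "sneaker", "qty": 2})
print(db.orders.find_one({"item": "sneaker"}))
```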

If you’ve been following the politics of open source over the last few months, you’ll understand that the optics of this aren’t great. It’s also no secret that AWS has long been accused of taking the best open-source projects and re-using and re-branding them without always giving back to those communities.

The wrinkle here is that MongoDB was one of the first companies that aimed to put a stop to this by re-licensing its open-source tools under a new license that explicitly requires companies offering its software as a service to buy a commercial license. Since then, others have followed.

“Imitation is the sincerest form of flattery, so it’s not surprising that Amazon would try to capitalize on the popularity and momentum of MongoDB’s document model,” MongoDB CEO and president Dev Ittycheria told us. “However, developers are technically savvy enough to distinguish between the real thing and a poor imitation. MongoDB will continue to outperform any impersonations in the market.”

That’s a pretty feisty comment. Last November, Ittycheria told my colleague Ron Miller that he believed that AWS loved MongoDB because it drives a lot of consumption. In that interview, he also noted that “customers have spent the last five years trying to extricate themselves from another large vendor. The last thing they want to do is replay the same movie.”

MongoDB co-founder and CTO Eliot Horowitz echoed this. “In order to give developers what they want, AWS has been pushed to offer an imitation MongoDB service that is based on the MongoDB code from two years ago,” he said. “Our entire company is focused on one thing — giving developers the best way to work with data with the freedom to run anywhere. Our commitment to that single mission will continue to differentiate the real MongoDB from any imitation products that come along.”

A MongoDB spokesperson also highlighted that the 3.6 API DocumentDB is compatible with is now two years old and lacks most of the newest features, including ACID transactions, global clusters and mobile sync.

To be fair, AWS has become more active in open source lately and, in a way, it’s giving developers what they want (and not all developers are happy with MongoDB’s own hosted service). But bypassing MongoDB’s licensing by aiming for API compatibility, given that AWS knows exactly why MongoDB changed its license, was always going to be a controversial move, and it won’t endear the company to the open-source community.

Jan
09
2019
--

Baidu Cloud launches its open-source edge computing platform

At CES today, Chinese tech giant Baidu announced OpenEdge, its open-source edge computing platform. At its core, OpenEdge is the local package component of Baidu’s existing Intelligent Edge (BIE) commercial offering, and it plays well with that service’s components for managing edge nodes and apps.

Given that this is clearly a developer announcement, I’m not sure why Baidu decided to use CES as the venue for this release, but there can be no doubt that China’s major tech firms have become quite comfortable with open source. Companies like Baidu, Alibaba and Tencent are members of the Linux Foundation and its growing stable of projects, for example, and virtually every major open-source organization now looks to China as its growth market. It’s no surprise, then, that we’re also seeing a wider range of Chinese companies open source their own projects.

“Edge computing is a critical component of Baidu’s ABC (AI, Big Data and Cloud Computing) strategy,” says Watson Yin, Baidu VP and GM of Baidu Cloud. “By moving the compute closer to the source of the data, it greatly reduces the latency, lowers the bandwidth usage and ultimately brings real-time and immersive experiences to end users. And by providing an open source platform, we have also greatly simplified the process for developers to create their own edge computing applications.”

A company spokesperson tells us that the open-source platform will include features like data collection, message distribution and AI inference, as well as tools for syncing with the cloud.
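
As a rough illustration of the pattern (not OpenEdge’s documented interfaces), an edge module might subscribe to a local message hub, run a lightweight inference on each reading and publish alerts locally, syncing with the cloud only as needed; the broker address, topics and threshold below are invented:

```python
# Hedged sketch of an edge module: consume sensor messages from a local
# MQTT-style hub and run a trivial local "inference" on each one.
# Broker address, topic names and the threshold are assumptions.
import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    # Toy inference step: flag overheating at the edge, with no
    # round trip to the cloud.
    if reading.get("temperature", 0) > 80:
        client.publish("alerts/overheat", msg.payload)

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)  # hypothetical local hub
client.subscribe("sensors/temperature")
client.loop_forever()
```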

Baidu also today announced that it has partnered with Intel to launch the BIE-AI-Box and with NXP Semiconductors to launch the BIE-AI-Board. The box is designed for in-vehicle video analysis while the board is small enough for cameras, drones, robots and similar applications.

Jan
07
2019
--

GitHub Free users now get unlimited private repositories

If you’re a GitHub user but you don’t pay, this is a good week. GitHub has always offered free accounts, but the caveat was that your code had to be public. To get private repositories, you had to pay. Starting tomorrow, that limitation is gone. Free GitHub users now get unlimited private projects with up to three collaborators.

The number of collaborators is really the only limitation here, and there’s no change to how the service handles public repositories, which can still have unlimited collaborators.
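
For developers who script against GitHub, the change is visible in the v3 REST API: creating a private repository on a free account is just a matter of setting the private flag (the token and repository name below are placeholders):

```python
# Create a private repository via GitHub's v3 REST API.
# The personal access token and repository name are placeholders.
import requests

resp = requests.post(
    "https://api.github.com/user/repos",
    headers={"Authorization": "token <personal-access-token>"},
    json={"name": "my-side-project", "private": True},
)
print(resp.status_code, resp.json().get("full_name"))
```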

This feels like a sign of goodwill on the part of Microsoft, which closed its acquisition of GitHub last October, with former Xamarin CEO Nat Friedman taking over as GitHub’s CEO. Some developers were rather nervous about the acquisition (though it feels like most have come to terms with it). It’s also fair to assume that GitHub’s model for monetizing the service is a bit different from Microsoft’s. Microsoft doesn’t need to squeeze money out of small teams — that’s not where the bulk of its revenue comes from. Instead, the company is mostly interested in getting large enterprises to use the service.

Speaking of teams, GitHub also announced today that it is renaming the GitHub Developer suite ‘GitHub Pro.’ The company says it’s doing so to “help developers better identify the tools they need.”

But what’s maybe even more important is that GitHub Business Cloud and GitHub Enterprise (now called Enterprise Cloud and Enterprise Server) have been merged and are now sold under the ‘GitHub Enterprise’ label, with per-user pricing.

The announcement of free private repositories probably took some of GitHub’s competitors by surprise, but here is what we heard from GitLab CEO Sid Sijbrandij: “GitHub today announced the launch of free private repositories with up to three collaborators. GitLab has offered unlimited collaborators on private repositories since the beginning. We believe Microsoft is focusing more on generating revenue with Azure and less on charging for DevOps software. At GitLab, we believe in a multi-cloud future where organizations use multiple public cloud platforms.”

Note: this story was scheduled for tomorrow, but due to a broken embargo, we decided to publish today. The updates will go live tomorrow.

Jan
05
2019
--

How Trulia began paying down its technical debt

As every software company knows, code ages, workarounds build on workarounds, and the code base becomes bloated over time. It becomes ever more difficult to work around the technical debt that has built up. This phenomenon is all but impossible to avoid, but at some point companies realize the debt is so great that it’s limiting their ability to build new functionality. That’s precisely what Trulia faced in 2017, when it began a process of paying down that debt and modernizing its architecture.

Trulia is a real estate site founded way back in 2005, an eternity ago in technology terms. The company went public in 2012 and was acquired by Zillow in 2014 for $3.5 billion, but has continued to operate as an independent brand under the Zillow umbrella. When engineering began thinking about an overhaul, it understood that a lot had changed technologically in the 12 years since the company’s inception. The team knew it had a humongous, monolithic code base that was inhibiting its ability to update the site.

The team tried pulling some of the newer functions out as services, but that didn’t make the site any more nimble, because those services still had to tie back into the monolithic central code base. The development team knew that escaping this coding trap would take a complete overhaul.

Brainstorming broad change

As you would expect, a process like this doesn’t happen overnight; it took months to plan and implement. It all started back in 2017, when the company held what it called an “Innovation Week” with the entire engineering team. Groups of engineers came up with ideas about how to solve the problem, but the proposal that got the most attention was Project Islands, which involved breaking the different pieces of the site into individual coding islands that could operate independently of one another.

It sounds simple, but in practice it meant breaking down the entire code base into services. The team would use Next.js and React to rebuild the front end, and GraphQL, the open-source API query language, to rebuild the back end.
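
To give a flavor of the approach, a GraphQL back end lets each island request exactly the fields it needs in one round trip. The endpoint and schema below (fields like neighborhood and medianPrice) are invented for illustration, not Trulia’s actual API:

```python
# Hypothetical sketch: an island fetching only the data it needs from
# a shared GraphQL back end. Endpoint and schema are invented.
import requests

QUERY = """
query NeighborhoodCard($id: ID!) {
  neighborhood(id: $id) {
    name
    medianPrice
    listings(limit: 3) { address price }
  }
}
"""

resp = requests.post(
    "https://internal-graphql.example.com/",  # placeholder endpoint
    json={"query": QUERY, "variables": {"id": "soma-sf"}},
)
print(resp.json()["data"]["neighborhood"])
```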

Deep Varma, Trulia’s VP of engineering, pointed out that because the site was built in 2005, it was developed on PHP and MySQL, two popular technologies of that era. Whenever his engineers changed any part of the site, he says, they needed to do a complete system release, which caused a major bottleneck.

What they really needed to do was move to a completely modern microservices architecture that allowed engineering teams to work independently in a continuous delivery approach without breaking any other team’s code. That’s where the concept of islands came into play.

Islands in the stream

The islands were actually microservices. Each one could communicate with a set of central common services like authentication, A/B testing, the navigation bar, the footer — all of the pieces that every mini code base would need, while allowing the teams building these islands to work independently, without requiring a huge rebuild every time they added a new element or changed something.

The harsh reality of this kind of overhaul came into focus as the teams realized they had to be writing the new pieces while the old system was still in place and running. In a video the company made describing the effort, one engineer likened it to changing the engine of a 747 in the middle of a flight.

Varma says he didn’t try to do everything at once, as he needed to see if the islands approach would work in practice first. In November 2017, he pulled the first engineering team together, and by January it had built the app shell (the common services piece) and one microservice island. When the proof of concept succeeded, Varma knew they were in business.

Building out the archipelago

It’s one thing to build a single island, but it’s another matter to build a chain of them and that would be the next step. By last April, engineering had shown enough progress that they were able to present the entire idea to senior management and get the go-ahead to move forward with a more complex project.

First, it took some work to get the Next.js framework to behave the way they wanted, so Varma brought the Next.js development team in to work with his engineers. Together they needed to figure out how to stitch the various islands together and resolve dependencies among the different services. The Next.js team actually changed its development roadmap for Trulia, speeding up delivery of these requirements, understanding that other companies would have similar issues.

Last July, the company released Neighborhoods, the first fully independent island on the site. More recently, it moved off-market properties to islands. Off-market properties, as the name implies, are pages with information about properties that are no longer for sale. Varma says these pages actually account for a significant portion of the company’s traffic.

While Varma would not say just how much of the site has been moved to islands at this point, he said the goal is to move the majority to the new platform in 2019. All of this shows that a complete overhaul of a complex site doesn’t happen overnight, but Trulia is taking steps to move off the original system it built in 2005 and onto the more modern, flexible architecture it has created with islands. It may not have paid down its technical debt in full in 2018, but it went a long way toward laying the foundation to do so.

Dec
20
2018
--

Cinven acquires One.com, one of Europe’s biggest hosting providers with 1.5M customers

One of the biggest providers of domain names and web hosting in Europe is changing hands today. One.com, which has around 1.5 million customers mainly across the north of the region, has been sold by private equity firm Accel-KKR to Cinven, another PE player that focuses on investments in Europe.

Terms of the deal are not being disclosed, but as a rough guide, Cinven once owned and sold another European hosting provider of comparable size: it acquired Host Europe Group in 2013 for $668 million and sold it to GoDaddy in December 2016, almost exactly two years ago, for $1.8 billion. At the time of the sale, Host Europe Group also had about 1.5 million customers.

One.com and its business segment represent a significant, if not rapidly evolving, part of the tech landscape: for as long as businesses and consumers continue to use the web, there will be a need for companies that sell and host domain names and provide services around them.

With a catchy domain name of its own, One.com has been riding the wave of that steady demand for several years. Accel-KKR says organic growth at the company has been running at 20 percent, and that under its four-year ownership revenues doubled to €60 million ($69 million) while profitability grew 50x, on a marketing pitch in which One.com positions itself as the ‘budget’ option for businesses.

“The vision of One.com since its founding has been to deliver value-added and easy-to-use solutions to small- and medium-sized businesses and prosumers,” said Jacob Jensen, Founder and CEO of One.com, in a statement. He is staying on to continue leading the company.

Cinven says it is interested in growing the business by way of acquisition, specifically: “There are opportunities to accelerate the growth of the business organically and through acquisition.”

In other words, expect some consolidation moves in the future, where some of the smaller providers in Europe potentially get gobbled up to create bigger entities with better economies of scale. That’s needed not just because GoDaddy has ramped up its presence here, but because the likes of Amazon have only grown in stature and provide a number of other services that make their offerings stickier.

“We are very excited to invest in One.com alongside Jacob. It is a high quality business with an attractive brand and scalable technology platform, operating in a market with structural growth drivers,” said Thomas Railhac, Partner at Cinven, in a statement. “This is a subsector we know well through Cinven’s successful investment in HEG in Fund 5, continuing to invest in both the organic growth story and targeted acquisitions.”

Dec
19
2018
--

Google’s Cloud Spanner database adds new features and regions

Cloud Spanner, Google’s globally distributed relational database service, is getting a bit more distributed today with the launch of a new region and new ways to set up multi-region configurations. The service is also getting a new feature that gives developers deeper insights into their most resource-consuming queries.

With this update, Google is adding Hong Kong (asia-east2), its newest data center location, to the Cloud Spanner lineup. Cloud Spanner is now available in 14 of the 18 Google Cloud Platform (GCP) regions, including seven the company added this year alone. The plan is to bring Cloud Spanner to every new GCP region as it comes online.

The other new region-related news is the launch of two new configurations for multi-region coverage. One, called eur3, focuses on the European Union and is obviously meant for users there who mostly serve a local customer base. The other, called nam6, focuses on North America, with coverage across both coasts and the middle of the country, using data centers in Oregon, Los Angeles, South Carolina and Iowa. Previously, the service offered only a North American configuration with three regions and a global configuration with three data centers spread across North America, Europe and Asia.
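
Selecting one of the new configurations happens at instance-creation time. A sketch with the google-cloud-spanner Python client, with the project, instance ID and node count as placeholders:

```python
# Sketch: create a Cloud Spanner instance on the new nam6 multi-region
# configuration. Project ID, instance ID and node count are placeholders.
from google.cloud import spanner

client = spanner.Client(project="my-project")
instance = client.instance(
    "my-instance",
    configuration_name="projects/my-project/instanceConfigs/nam6",
    display_name="North America, both coasts",
    node_count=3,
)
operation = instance.create()
operation.result()  # block until the instance is ready
```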

While Cloud Spanner is obviously meant for global deployments, these new configurations are great for users who only need to serve certain markets.

As far as the new query features are concerned, Cloud Spanner is now making it easier for developers to view, inspect and debug queries. The idea here is to give developers better visibility into their most frequent and expensive queries (and maybe make them less expensive in the process).
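
Some of that visibility is exposed in SQL itself: Cloud Spanner surfaces built-in statistics tables that can be queried like any other table. A sketch, assuming the SPANNER_SYS query-stats tables and placeholder instance and database IDs:

```python
# Sketch: read the most expensive recent queries from the built-in
# statistics tables. Instance and database IDs are placeholders.
from google.cloud import spanner

client = spanner.Client(project="my-project")
database = client.instance("my-instance").database("my-db")

with database.snapshot() as snapshot:
    rows = snapshot.execute_sql(
        """SELECT text, execution_count, avg_latency_seconds
           FROM SPANNER_SYS.QUERY_STATS_TOP_10MINUTE
           ORDER BY avg_latency_seconds DESC
           LIMIT 5"""
    )
    for row in rows:
        print(row)
```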

In addition to the Cloud Spanner news, Google Cloud today announced that its Cloud Dataproc Hadoop and Spark service now supports the R language, in addition to Python 3.7 support on App Engine.

Dec
19
2018
--

Dataiku raises $101 million for its collaborative data science platform

Dataiku wants to turn buzzwords into an actual service. The company has been focused on data tools for many years, since before everybody started talking about big data, data science and machine learning.

And the company just raised $101 million in a round led by Iconiq Capital, with Alven Capital, Battery Ventures, Dawn Capital and FirstMark Capital also participating.

If you’re generating a lot of data, Dataiku helps you find the meaning in it. First, you import your data by connecting Dataiku to your storage system. The platform supports dozens of database formats and sources — Hadoop, NoSQL, images, you name it.

You can then use Dataiku to visualize your data, clean your data set, run some algorithms on your data in order to build a machine learning model, deploy it and more. Dataiku has a visual coding tool, or you can use your own code.
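
For the code-first path, Dataiku exposes a project’s datasets to Python as pandas DataFrames. A minimal sketch of the kind of recipe you might write inside a project (dataset and column names are placeholders):

```python
# Minimal sketch of a Python recipe inside a Dataiku project: datasets
# are read as pandas DataFrames. Dataset and column names are placeholders.
import dataiku

orders = dataiku.Dataset("customer_orders")
df = orders.get_dataframe()

# A simple cleaning step before handing the data to a model.
df = df.dropna(subset=["order_value"])
print(df["order_value"].describe())
```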

But Dataiku isn’t just a tool for data scientists. Even if you’re a business analyst, you can visualize and extract data from Dataiku directly. And because of its software-as-a-service approach, your entire team of data scientists and data analysts can collaborate on Dataiku.

Clients use it to track churn, detect fraud, forecast demand, optimize lifetime values and more. Customers include General Electric, Sephora, Unilever, KUKA, FOX and BNP Paribas.

With today’s funding round, the company plans to double its staff. It currently has 200 people in New York, Paris and London, and plans to open offices in Singapore and Sydney as well.

Dec
13
2018
--

They scaled YouTube — now they’ll shard everyone with PlanetScale

When the former CTOs of YouTube, Facebook and Dropbox seed fund a database startup, you know there’s something special going on under the hood. Jiten Vaidya and Sugu Sougoumarane saved YouTube from a scalability nightmare by inventing and open-sourcing Vitess, a brilliant relational data storage system. But in the decade since working there, the pair have been inundated with requests from tech companies desperate for help building the operational scaffolding needed to actually integrate Vitess.

So today the pair are revealing their new startup, PlanetScale, which makes it easy to build multi-cloud databases that handle enormous amounts of information without locking customers into Amazon, Google or Microsoft’s infrastructure. Battle-tested at YouTube, the technology could let startups fret less about their back end and focus more on their unique value proposition. “Now they don’t have to reinvent the wheel,” Vaidya tells me. “A lot of companies facing this scaling problem end up solving it badly in-house, and now there’s a way to solve that problem by using us to help.”

PlanetScale quietly raised a $3 million seed round in April, led by SignalFire and joined by a who’s who of engineering luminaries. They include YouTube co-founder and CTO Steve Chen, Quora CEO and former Facebook CTO Adam D’Angelo, former Dropbox CTO Aditya Agarwal, PayPal and Affirm co-founder Max Levchin, MuleSoft co-founder and CTO Ross Mason, Google director of engineering Parisa Tabriz and Facebook’s first female engineer and South Park Commons founder Ruchi Sanghvi. If anyone could foresee the need for Vitess implementation services, it’s these leaders, who’ve dealt with scaling headaches at tech’s top companies.

But how can a scrappy startup challenge the tech juggernauts for cloud supremacy? First, by actually working with them. The PlanetScale beta that’s now launching lets companies spin up Vitess clusters on its database-as-a-service, on their own infrastructure through a licensing deal, or on AWS, with Google Cloud and Microsoft Azure support coming shortly. Once those integrations are established, PlanetScale clients can use it as an interface for a multi-cloud setup, keeping, say, their master data copies on AWS US-West with replicas on Google Cloud in Ireland and elsewhere. That protects companies from becoming dependent on one provider and then getting stuck with price hikes or service problems.

PlanetScale also promises to uphold the principles that undergirded Vitess. “It’s our value that we will keep everything in the query path completely open source so none of our customers ever have to worry about lock-in,” Vaidya says.

PlanetScale co-founders (from left): Jiten Vaidya and Sugu Sougoumarane

Battle-tested, YouTube-approved

He and Sougoumarane met 25 years ago at the Indian Institute of Technology Bombay. Back in 1993 they worked together at pioneering database company Informix before it flamed out. Sougoumarane was eventually hired by Elon Musk as an early engineer at X.com, the company that went on to become PayPal, and then left for YouTube. Vaidya was working at Google, and the pair were reunited when it bought YouTube and Sougoumarane pulled him onto the team.

“YouTube was growing really quickly and the relational database they were using, MySQL, was sort of falling apart at the seams,” Vaidya recalls. Adding more CPU and memory to the database infrastructure wasn’t cutting it, so the team created Vitess. The horizontal-scaling sharding middleware for MySQL let users segment their database to reduce memory usage while still being able to run operations rapidly. YouTube has smoothly ridden that infrastructure to 1.8 billion users ever since.
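
Part of Vitess’s appeal is that applications barely notice it: vtgate, its query router, speaks the MySQL protocol, so an ordinary MySQL client keeps working while queries are routed to the right shard behind the scenes. A sketch with placeholder host, credentials and schema (15306 is the port commonly used in Vitess examples):

```python
# Sketch: talking to a sharded Vitess keyspace through vtgate with a
# plain MySQL client. Host, credentials and schema are placeholders.
import pymysql

conn = pymysql.connect(host="vtgate.internal", port=15306,
                       user="app", password="secret", database="commerce")
with conn.cursor() as cur:
    # Looks like a normal MySQL query; vtgate routes it to the shard
    # (or shards) holding this customer's rows.
    cur.execute("SELECT COUNT(*) FROM orders WHERE customer_id = %s", (42,))
    print(cur.fetchone())
```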

“Sugu and Mike Solomon invented Vitess and open-sourced it right from the beginning in 2010, because they knew the scaling problem wasn’t just YouTube’s; they’d be at other companies five or 10 years later trying to solve the same problem,” Vaidya explains. That proved true, and now top apps like Square and HubSpot run entirely on Vitess, with Slack now 30 percent migrated.

Vaidya left YouTube in 2012 and became the lead engineer at Endorse, which was acquired by Dropbox, where he then worked for four years. In the meantime, though, the engineering community strayed toward MongoDB-style non-relational databases, which Vaidya considers inferior. He sees indexing issues, and says that if the system hiccups during an operation, data can become inconsistent — a big problem for banking and commerce apps. “We think horizontally scaled relational databases are more elegant and are something enterprises really need.”

Database legends reunite

Fed up with the engineering heresy, a year ago Vaidya committed to creating PlanetScale. It’s composed of four core offerings: professional training in Vitess, on-demand support for open-source Vitess users, Vitess database-as-a-service on PlanetScale’s servers and software licensing for clients that want to run Vitess on premises or through other cloud providers. It lets companies re-shard their databases on the fly to relocate user data to comply with regulations like GDPR, safely migrate from other systems without major codebase changes, make on-demand changes and run on Kubernetes.

The PlanetScale team

PlanetScale’s customers now include Indonesian e-commerce giant Bukalapak, and it’s helping Booking.com, GitHub and New Relic migrate to open-source Vitess. Growth is suddenly ramping up due to inbound inquiries. Last month, around the time Square Cash became the No. 1 app, its engineering team published a blog post extolling the virtues of Vitess. Now everyone’s seeking help with Vitess sharding, and PlanetScale is waiting with open arms. “Jiten and Sugu are legends and know firsthand what companies require to be successful in this booming data landscape,” says Ilya Kirnos, founding partner and CTO of SignalFire.

The big cloud providers are trying to adapt to the relational database trend, with Google’s Cloud Spanner and Cloud SQL and Amazon’s RDS and Aurora. Their huge networks and marketing war chests could pose a threat. But Vaidya insists that while it might be easy to get data into these systems, it can be a pain to get it out. PlanetScale is designed to give customers optionality through its multi-cloud functionality, so their eggs aren’t all in one basket.

Finding product market fit is tough enough. Trying to suddenly scale a popular app while also dealing with all the other challenges of growing a company can drive founders crazy. But if it’s good enough for YouTube, startups can trust PlanetScale to make databases one less thing they have to worry about.

Dec
13
2018
--

New tool uses AI to roll back problematic continuous delivery builds automatically

As companies shift to CI/CD (continuous integration/continuous delivery), they face a problem: how to monitor and fix issues in builds that have already been deployed. How do you deal with a problem after you’ve moved on to the next delivery milestone? Harness, the startup launched last year by AppDynamics founder Jyoti Bansal, wants to fix that with a new tool called 24×7 Service Guard.

The new tool is designed to help companies working with a continuous delivery process by monitoring all of their builds, regardless of when they were launched. What’s more, the company claims that, using AI and machine learning, it can automatically roll a problematic build back to one that worked, freeing developers and operations to keep working without worry.

The company launched last year with a tool called Continuous Verification to verify that a continuous delivery build got deployed. With today’s announcement, Bansal says the company is taking this to another level to help understand what happens after you deploy.

The tool watches every build, even days after deployment, drawing on data from tools like AppDynamics, New Relic, Elastic and Splunk, then using AI and machine learning to identify problems and roll the build back to a working state without human intervention. What’s more, your team gets a unified view of the performance and quality of every build across all of your monitoring and logging tools.
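
In pseudocode terms (an illustrative sketch, not Harness’s actual API), the loop the company describes looks something like this: score a deployment’s health signals against a baseline and trigger a rollback when quality degrades:

```python
# Illustrative sketch only; all names are hypothetical, not Harness's API.
def health_score(metrics: dict) -> float:
    """Toy stand-in for the ML model scoring error rate and latency."""
    return max(0.0, 1.0 - metrics["error_rate"] - metrics["latency_ms"] / 1000)

def service_guard(build: str, last_good: str, metrics: dict,
                  baseline: float = 0.9) -> str:
    # Roll back automatically when the new build scores below baseline.
    if health_score(metrics) < baseline:
        return f"rolling back {build} -> {last_good}"
    return f"{build} healthy"

print(service_guard("v42", "v41", {"error_rate": 0.2, "latency_ms": 300}))
```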

“People are doing Continuous Delivery and struggling with it. They are also using these AI Ops kinds of products, which are watching things in production, and trying to figure out what’s wrong. What we are doing is we’re bringing the two together and ensuring nothing goes wrong,” Bansal explained.

24×7 Service Guard Console. Screenshot: Harness

He says he brought this product to market because he saw enterprise companies struggling with CI/CD. The early messaging that you should move fast and break things really doesn’t work in enterprise settings; enterprises need tooling that ensures critical applications keep running even with continuous builds (however you define that). “How do you enable developers so that they can move fast and make sure the business doesn’t get impacted? I feel the industry was underserved by this [earlier] message,” he said.

While it’s hard for any product to absolutely guarantee uptime, this one provides tooling for companies that see the value of CI/CD but are looking for a way to keep their applications up and running, so they aren’t constantly on a deploy/repair treadmill. If it works as described, it could help advance CI/CD, especially at large companies that need to move faster and want assurances that when things break, they can be fixed in an automated fashion.
