Jun 06, 2018

Four years after its release, Kubernetes has come a long way

On June 6th, 2014, Kubernetes was released for the first time. At the time, nobody could have predicted that four years later the project would become a de facto standard for container orchestration, or that the biggest tech companies in the world would be backing it. That would come later.

If you think back to June 2014, containerization was just beginning to take off thanks to Docker, which was popularizing the concept with developers, but because the technology was so new, there was no standard way to manage those containers.

Google had been using containers as a way to deliver applications for years and ran a tool called Borg to handle orchestration. It’s called an orchestrator because much like a conductor of an orchestra, it decides when a container is launched and when it shuts down once it’s completed its job.

At the time, two Google engineers, Craig McLuckie and Joe Beda, who would later go on to start Heptio, were looking at developing an orchestration tool like Borg for companies that might not have the depth of engineering talent of Google to make it work. They wanted to spread this idea of how they develop distributed applications to other developers.

Hello world

Before that first version hit the streets, what would become Kubernetes developed out of a need for an orchestration layer that Beda and McLuckie had been considering for a long time. They were both involved in bringing Google Compute Engine, Google’s Infrastructure as a Service offering, to market, but they felt like there was something missing in the tooling that would fill in the gaps between infrastructure and platform service offerings.

“We had long thought about trying to find a way to bring a sort of a more progressive orchestrated way of running applications in production. Just based on our own experiences with Google Compute Engine, we got to see firsthand some of the challenges that the enterprise faced in moving workloads to the cloud,” McLuckie explained.

He said that they also understood some of the limitations associated with virtual machine-based workloads and were thinking about tooling to help with all of that. “And so we came up with the idea to start a new project, which ultimately became Kubernetes.”

Let’s open source it

When Google began developing Kubernetes in March 2014, it wanted nothing less than to bring container orchestration to the masses. It was a big goal and McLuckie, Beda and teammate Brendan Burns believed the only way to get there was to open source the technology and build a community around it. As it turns out, they were spot on with that assessment, but couldn’t have been 100 percent certain at the time. Nobody could have.

Photo: Cloud Native Computing Foundation

“If you look at the history, we made the decision to open source Kubernetes and make it a community-oriented project much sooner than conventional wisdom would dictate and focus on really building a community in an open and engaged fashion. And that really paid dividends as Kubernetes has accelerated and effectively become the standard for container orchestration,” McLuckie said.

The next thing they did was to create the Cloud Native Computing Foundation (CNCF) as an umbrella organization for the project. If you think about it, this project could have gone in several directions, as current CNCF director Dan Kohn described in a recent interview.

Going cloud native

Kohn said Kubernetes was unique in a couple of ways. First of all, it was based on existing technology developed over many years at Google. “Even though the Kubernetes code was new, the concepts and engineering and know-how behind it were based on 15 years at Google building Borg (and a Borg replacement called Omega that failed),” Kohn said. The other thing was that Kubernetes was designed from the beginning to be open sourced.

Photo: Swapnil Bhartiya on Flickr. Used under CC by SA 2.0 license

He pointed out that Google could have gone in a few directions with Kubernetes. It could have created a commercial product and sold it through Google Cloud. It could have open sourced it but kept a strong central lead, as it did with Go. It could have gone to the Linux Foundation and said it wanted to create a stand-alone Kubernetes Foundation. But it didn’t do any of these things.

McLuckie says they decided to do something entirely different and place it under the auspices of the Linux Foundation, but not as a Kubernetes project. Instead, they wanted to create a new framework for cloud native computing itself, and the CNCF was born. “The CNCF is a really important staging ground, not just for Kubernetes, but for the technologies that needed to come together to really complete the narrative, to make Kubernetes a much more comprehensive framework,” McLuckie explained.

Getting everyone going in the same direction

Over the last few years, we have watched as Kubernetes has grown into a container orchestration standard. Last summer, in quick succession, AWS, Oracle, Microsoft, VMware and Pivotal all joined the CNCF. They came together with Red Hat, Intel, IBM, Cisco and others who were already members.

Cloud Native Computing Foundation Platinum members

Each of these players no doubt wanted to control the orchestration layer, but they saw Kubernetes gaining momentum so rapidly that they had little choice but to go along. Kohn jokes that having all these big-name players on board is like herding cats, but bringing them in has been the goal all along. He said it just happened much faster than he thought it would.

David Aronchick, who runs the open source Kubeflow Kubernetes machine learning project at Google and was running Kubernetes in the early days, is shocked by how quickly it has grown. “I couldn’t have predicted it would be like this. I joined in January 2015 and took on project management for Google Kubernetes. I was stunned at the pent-up demand for this kind of thing,” he told TechCrunch in a recent interview.

As it has grown, it has become readily apparent that McLuckie was right about building that cloud native framework instead of a stand-alone Kubernetes foundation. Today there are dozens of adjacent projects and the organization is thriving.

Nobody is more blown away by this than McLuckie himself, who says seeing Kubernetes hit these various milestones since its initial release has been amazing for him and his team to watch. “It’s just been a series of these wonderful kind of moments as Kubernetes has gained a head of steam, and it’s been so much fun to see the community really rally around it.”

Jun 06, 2018

SAP latest enterprise player to offer cloud blockchain service

SAP announced today at its Sapphire customer conference that it was making the SAP Leonardo Blockchain service generally available. It’s a cloud service to help companies build applications based on digital ledger-style technology.

Gil Perez, senior vice president for product and innovation and head of digital customer initiatives at SAP, says most of the customers he talks to are still very early in the proof of concept stage, but not so early that SAP doesn’t want to provide a service to help move them along the maturity curve.

“We are announcing the general availability of the SAP Cloud Platform Blockchain Services,” Perez said. It is a generalized service on top of which customers can begin building their blockchain projects. He says SAP is taking an agnostic approach to the underlying ledger technology, whether it’s the open source Hyperledger project, where SAP is a platinum sponsor, MultiChain or any other blockchain or decentralized distributed ledger technology.

Perez said part of the reason for this flexibility is that blockchain technology is really still being defined and SAP doesn’t want to commit to any underlying ledger approach until the market decides which way to go. He says this should allow them to minimize the impact on customers as the technology evolves.

SAP joins other enterprise companies like Oracle, IBM, Microsoft and Amazon, which have previously released blockchain services for their customers. For SAP, which many companies use for the back-office management of everything from finance to logistics, the blockchain could present some interesting use cases for its customers, such as supply chain management.

In this case, the blockchain could help reduce paperwork, bring products to market more quickly and provide an easy audit trail. Instead of requesting a scanned copy of a signed document, you could simply click on a node on the blockchain and see the approval (or denial) and follow the products through the shipping process to the marketplace.
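To make the audit-trail idea concrete, here is a toy sketch of the general mechanism behind any hash-chained ledger. It is not SAP’s implementation or any particular blockchain’s API, just an illustration of why tampering with an earlier approval is easy to detect: each record embeds the hash of the record before it.

```python
# Toy illustration only -- not SAP Leonardo Blockchain or Hyperledger code.
# Each record stores the hash of the previous record, so altering an earlier
# approval changes its hash and breaks every record that follows it.
import hashlib
import json
import time

def append_record(chain, payload):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"payload": payload, "prev_hash": prev_hash, "timestamp": time.time()}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return record

ledger = []
append_record(ledger, {"step": "approved", "document": "PO-1234", "by": "buyer"})
append_record(ledger, {"step": "shipped", "document": "PO-1234", "carrier": "acme"})
append_record(ledger, {"step": "received", "document": "PO-1234", "by": "warehouse"})
```

A verifier can walk the chain and recompute each hash; any mismatch pinpoints exactly where a record was changed.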

But Perez stresses that just because it’s early doesn’t mean they aren’t working on some pretty substantial projects. He cited one with a pharmaceutical company to ensure the provenance of drugs that has already involved over a billion transactions.

SAP is simply trying to keep up with what customers want. Prior to the GA announced today, the company conducted a survey of 250 customers and found that, although it is early days, there is enterprise interest in exploring blockchain technology. Whether this initiative can expand into a broader business is hard to say, but SAP sees blockchain as a logical adjacent technology to its core offerings.

Jun 05, 2018

SAP gives CRM another shot with new cloud-based suite

Customer Relationship Management (CRM) is a mature market with a clear leader in Salesforce and a bunch of other enterprise players like Microsoft, Oracle and SAP vying for position. SAP decided to take another shot today when it released a new business products suite called SAP C/4HANA. (Yeah, catchy, I know.)

SAP C/4HANA pulls together several acquisitions from the last several years. It started in 2013 when SAP bought Hybris for around a billion dollars, which gave it a logistics tracking piece. Then last year it bought Gigya for $350 million, giving it a way to track customer identity. This year it bought the final piece when it paid $2.4 billion for CallidusCloud for configure, price, quote (CPQ) capability.

SAP has taken these three pieces and packaged them together into a customer relationship management suite. The company sees the term much more broadly than simply tracking a database of names and vital information on customers. With these products, it hopes to give its customers a way to provide consumer data protection, marketing, commerce, sales and customer service.

They see this approach as different, but it’s really more of what the other players are doing by packaging sales, service and marketing into a single platform. “The legacy CRM systems are all about sales; SAP C/4HANA is all about the consumer. We recognize that every part of a business needs to be focused on a single view of the consumer. When you connect all SAP applications together in an intelligent cloud suite, the demand chain directly fuels the behaviors of the supply chain,” CEO Bill McDermott said in a statement.

It’s interesting that McDermott goes after legacy CRM tools because his company has offered its share of them over the years, but its market share has been headed in the wrong direction. This new cloud-based package is designed to change that. If you can’t build it, you can buy it, and that’s what SAP has done here.

Brent Leary, owner at CRM Essentials, who has been watching this market for many years, says that while SAP has a big back-office customer base in ERP, it’s going to be tough to pull customers back to SAP as a CRM provider. “I think their huge base of ERP customers provides them with an opportunity to begin making inroads, but it will be tough as mindshare for CRM/Customer Engagement has moved away from SAP,” he told TechCrunch.

He says that it will be important for this new product to find its niche in a defined market. “It will be imperative going forward for SAP to find spots to ‘own’ in the minds of corporate buyers in order to optimize their chances of success against their main competitors,” he said.

It’s obviously not going to be easy, but SAP has used its cash to buy some companies and give it another shot. Time will tell if it was money well spent.

Jun 04, 2018

Egnyte releases one-step GDPR compliance solution

Egnyte has always had the goal of protecting data and files wherever they live, whether on-premises or in the cloud. Today, the company announced a new feature to help customers comply, in a straightforward fashion, with the GDPR privacy regulations that went into effect in Europe last week.

You can start by simply telling Egnyte that you want to turn on “Identify sensitive content.” You then select which sets of rules you want to check for compliance including GDPR. Once you do this, the system goes and scans all of your repositories to find content deemed sensitive under GDPR rules (or whichever other rules you have selected).

Photo: Egnyte

It then gives you a list of files and marks them with a risk factor from 1 to 9, with 1 being the lowest level of risk and 9 the highest. You can configure the program to expose whichever files you wish based on your own level of compliance tolerance. So, for instance, you could ask to see any files with a risk level of 7 or higher.
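A minimal sketch of that kind of threshold filtering, assuming a scan has already produced a list of flagged files with risk scores. The field names and sample data are hypothetical; this is not Egnyte’s API, just the shape of the filtering step.

```python
# Hypothetical scan output -- not Egnyte's actual data model or API.
scan_results = [
    {"path": "/finance/eu_customers.xlsx", "rule_set": "GDPR", "risk": 8},
    {"path": "/hr/handbook.pdf",           "rule_set": "GDPR", "risk": 2},
    {"path": "/sales/leads_export.csv",    "rule_set": "GDPR", "risk": 7},
]

RISK_THRESHOLD = 7  # only surface files at or above your compliance tolerance

flagged = [f for f in scan_results if f["risk"] >= RISK_THRESHOLD]
for f in sorted(flagged, key=lambda item: item["risk"], reverse=True):
    print(f"risk {f['risk']}: {f['path']} ({f['rule_set']})")
```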

“In essence, it’s a data security and governance solution for unstructured data, and we are approaching that at the repository levels. The goal is to provide visibility, control and protection of that information in any unstructured repository,” Jeff Sizemore, VP of governance for Egnyte Protect, told TechCrunch.

Photo: Egnyte

Sizemore says that Egnyte weighs the sensitivity of the data against the danger that it could be exposed and leave a customer in violation of GDPR rules. “We look at things like public links into groups, which is basically just governance of the data, making sure nothing is wide open from a file share perspective. We also look at how the information is being shared,” Sizemore said. A social security number being shared internally is a lot less risky than a thousand social security numbers being shared in a public link.

The service covers 28 nations and 24 languages and it’s pre-configured to understand what data is considered sensitive by country and language. “We already have all the mapping and all the languages sitting underneath these policies. We are literally going into the data and actually scanning through and looking for GDPR-relevant data that’s in the scope of Article 40.”

The new service is generally available starting Tuesday morning. The company will be making an announcement at the InfoSecurity Conference in London. The service had been in beta prior to this.

Jun 04, 2018

Microsoft Azure will soon offer machines with up to 12 TB of memory

Do you have an application that needs a lot of memory? Maybe as much as 12 terabytes of memory? Well, you’re in luck because Microsoft Azure will soon offer virtual machines with just that much RAM, based on Intel’s Xeon Scalable servers.

The company made this announcement in concert with the launch of a number of other virtual machine (VM) types that are specifically geared toward running high-memory workloads. The standard use case for these is running the SAP Hana in-memory database service.

So in addition to this massive new 12 TB VM, Microsoft is also launching a new 192 GB machine that extends the lower end of Hana-optimized machines on Azure, as well as a number of other Hana options that scale across multiple VMs and can offer combined memory sizes of up to 18 TB.

Another new feature of Azure launching today is Standard SSDs. These will offer Azure users a new option for running entry-level production workloads that require consistent disk performance and throughput, without the full price of what are now called “Premium SSDs.” The Standard SSDs won’t offer the same level of performance, but Microsoft promises that developers will still get improved latency, reliability and scalability compared to standard hard disks in its cloud.

Jun 04, 2018

Microsoft promises to keep GitHub independent and open

Microsoft today announced its plans to acquire GitHub for $7.5 billion in stock. Unsurprisingly, that sent a few shock waves through the developer community, which still often eyes Microsoft with considerable unease. During a conference call this morning, Microsoft CEO Satya Nadella, incoming GitHub CEO (and Xamarin founder) Nat Friedman and GitHub co-founder and outgoing CEO Chris Wanstrath laid out the plans for GitHub’s future under Microsoft.

The core message everybody on today’s call stressed was that GitHub will continue to operate as an independent company. That’s very much the approach Microsoft took with its acquisition of LinkedIn, but to some degree, it’s also an admission that Microsoft is aware of its reputation among many of the developers who call GitHub their home. GitHub will remain an open platform that any developer can plug into and extend, Microsoft promises. It’ll support any cloud and any device.

Unsurprisingly, while the core of GitHub won’t change, Microsoft does plan to extend GitHub’s enterprise services and integrate them with its own sales and partner channels. And Nadella noted that the company will use GitHub to bring Microsoft’s developer tools and services “to new audiences.”

With Nat Friedman taking over as CEO, GitHub will have a respected technologist at the helm. Microsoft’s acquisition and integration of Xamarin has, at least from the outside, been a success (and Friedman himself always seems very happy about the outcome when I talk to him), so I think this bodes quite well for GitHub. After joining Microsoft, Friedman ran the developer services team at the company. Wanstrath, who only took over the CEO role again after the company’s previous CEO was ousted following a harassment scandal, had long said that he wanted to step down and take a more active product role. And that’s what’s happening now that Friedman is taking over. Wanstrath will become a technical fellow and work on “strategic software initiatives” at Microsoft.

Indeed, during an interview after the acquisition was announced, Friedman repeatedly noted that he thinks GitHub is the most important developer company today — and it turns out that he started advocating for a closer relationship between the two companies right after he joined Microsoft two years ago.

During today’s press call, Friedman also stressed Microsoft’s commitment to keeping GitHub as open as it is today, but he also plans to expand the service and its community. “We want to bring more developers and more capabilities to GitHub,” he said. “Because as a network and as a group of people in a community, GitHub is stronger the bigger it is.”

Friedman echoed that in our interview later in the day and noted that he expected the developer community to be skeptical of the mashup of these two companies. “There is always healthy skepticism in the developer community,” he told me. “I would ask developers to look at the last few years of Microsoft history and really honestly Microsoft’s transformation into an open source company.” He asked developers to judge Microsoft by that and noted that what really matters, of course, is that the company will follow through on the promises it made today.

As for the product itself, Friedman noted that everything GitHub does should be about making a developer’s life easier. And to get started, that’ll mean making developing in the cloud easier. “We think broadly about the new and compelling types of ways that we can integrate cloud services into GitHub,” he noted. “And this doesn’t just apply to our cloud. GitHub is an open platform. So we have the ability for anyone to plug their cloud services into GitHub, and make it easier for you to go from code to cloud. And it extends beyond the cloud as well. Code to cloud, code to mobile, code to edge device, code to IoT. Every workflow that a developer wants to pursue, we will support.”

Another area the company will work on is the GitHub Marketplace. Microsoft says that it will offer all of its developer tools and services in the GitHub Marketplace.

And unsurprisingly, VS Code, Microsoft’s free and open source code editor, will get deeply integrated GitHub support.

“Our vision is really all about empowering developers and creating a home where you can use any language, any operating system, any cloud, any device, for every developer, whether you’re a student, a hobbyist, a large company, a startup or anything in between. GitHub is the home for all developers,” said Friedman. In our interview, he also stressed that his focus will be on making “GitHub better at making GitHub” and that he plans to do so by bringing Microsoft’s resources and infrastructure to the code hosting service, while at the same time leaving it to operate independently.

It’s unclear whether all of these commitments will ease developers’ fears of losing GitHub as a relatively neutral third party in the ecosystem.

Nadella, who is surely aware of this, addressed it directly today. “We recognize the responsibility we take on with this agreement,” he said. “We are committed to being stewards of the GitHub community, which will retain its developer-first ethos, operate independently and remain an open platform. We will always listen to developer feedback and invest in both fundamentals as well as new capabilities once the acquisition closes.”

In his prepared remarks, Nadella also stressed Microsoft’s heritage as a developer-centric company and that it is already the most active organization on GitHub. But more importantly, he addressed Microsoft’s role in the open source community, too. “We have always loved developers, and we love open source developers,” he said. “We’ve been on a journey ourselves with open source and the open source community. Today, we are all in with open source. We are active in the open source ecosystem, we contribute to open source projects, and some of our most vibrant developer tools and frameworks are open source. When it comes to our commitment to open source, judge us by the actions we have taken in the recent past, our actions today and in the future.”

Jun 04, 2018

How Yelp (mostly) shut down its own data centers and moved to AWS

Back in 2013, Yelp was a nine-year-old company built on a set of internal systems. It was coming to the realization that running its own data centers might not be the most efficient way to run a business that was continuing to scale rapidly. At the same time, the company understood that the tech world had changed dramatically from 2004, when it launched, and that it needed to transform the underlying technology to a more modern approach.

That’s a lot to take on in one bite, but it wasn’t something that happened willy-nilly or overnight, says Jason Fennell, SVP of engineering at Yelp. The vast majority of the company’s data was being processed in a massive Python repository that was getting bigger all the time. The conversation about shifting to a microservices architecture began in 2012.

The company was also running the massive Yelp application inside its own data centers, and as it grew, it was increasingly limited by the long lead times required to procure new hardware and get it online. Yelp saw this as an unsustainable situation over the long term and began a process of transforming from running a huge monolithic application on-premises to one built on microservices running in the cloud. It was quite a journey.

The data center conundrum

Fennell described the classic scenario of a company that could benefit from a shift to the cloud. Yelp had a small operations team dedicated to setting up new machines. When engineering anticipated a new resource requirement, they had to give the operations team sufficient lead time to order new servers and get them up and running, certainly not the most efficient way to deal with a resource problem, and one that would have been easily solved by the cloud.

“We kept running into a bottleneck, I was running a chunk of the search team [at the time] and I had to project capacity out to 6-9 months. Then it would take a few months to order machines and another few months to set them up,” Fennell explained. He emphasized that the team charged with getting these machines going was working hard, but there were too few people and too many demands and something had to give.

“We were on this cusp. We could have scaled up that team dramatically and gotten [better] at building data centers and buying servers and doing that really fast, but we were hearing a lot about AWS and the advantages there,” Fennell explained.

To the cloud!

They looked at the cloud market landscape in 2013 and AWS was the clear leader technologically. That meant moving some part of their operations to EC2. Unfortunately, that exposed a new problem: how to manage this new infrastructure in the cloud. This was before the notion of cloud-native computing even existed. There was no Kubernetes. Sure, Google was operating in a cloud-native fashion in-house, but it was not really an option for most companies without a huge team of engineers.

Yelp needed to explore new ways of managing operations in a hybrid cloud environment where some of the applications and data lived in the cloud and some lived in their data center. It was not an easy problem to solve in 2013 and Yelp had to be creative to make it work.

That meant remaining with one foot in the public cloud and the other in a private data center. One tool that helped ease the transition was AWS Direct Connect, which was released the prior year and enabled Yelp to directly connect from their data center to the cloud.

Laying the groundwork

About this time, as they were figuring out how AWS works, another revolutionary technological change was occurring when Docker emerged and began mainstreaming the notion of containerization. “That’s another thing that’s been revolutionary. We could suddenly decouple the context of the running program from the machine it’s running on. Docker gives you this container, and is much lighter weight than virtualization and running full operating systems on a machine,” Fennell explained.

Another thing that was happening was the emergence of the open source data center operating system called Mesos, which offered a way to treat the data center as a single pool of resources. They could apply this notion to wherever the data and applications lived. Mesos also offered a container orchestration tool called Marathon in the days before Kubernetes emerged as a popular way of dealing with this same issue.

“We liked Mesos as a resource allocation framework. It abstracted away the fleet of machines. Mesos abstracts many machines and controls programs across them. Marathon holds guarantees about what containers are running where. We could stitch it all together into this clear opinionated interface,” he said.
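For a sense of what that interface looked like in practice, here is a rough sketch of submitting a Docker container to Marathon’s REST API. The host name, image and resource numbers are made up, and this shows the generic Marathon app format of that era rather than anything specific to Yelp’s PaaSTA.

```python
# Sketch of a generic Marathon deployment, not PaaSTA itself.
# Marathon keeps the requested number of container instances running
# somewhere on the Mesos cluster and reschedules them if a machine dies.
import requests

app = {
    "id": "/example-service",
    "instances": 3,
    "cpus": 0.25,
    "mem": 256,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "example/service:1.0", "network": "BRIDGE"},
    },
}

# The URL is hypothetical; POST /v2/apps asks Marathon to schedule the app.
resp = requests.post("http://marathon.example.internal:8080/v2/apps", json=app)
resp.raise_for_status()
print("deployment requested:", resp.json().get("deployments"))
```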

Pulling it all together

While all this was happening, Yelp began exploring how to move to the cloud and use a Platform as a Service approach for the software layer. The problem was that, at the time they started, there wasn’t really any viable way to do this. In the buy-versus-build decision making that goes on in large transformations like this one, they felt they had little choice but to build that platform layer themselves.

In late 2013 they began to pull together the idea of building this platform on top of Mesos and Docker, giving it the name PaaSTA, an internal joke that stood for Platform as a Service, Totally Awesome. It became simply known as Pasta.

Photo: David Silverman/Getty Images

The project had the ambitious goal of making Yelp’s infrastructure work as a single fabric, in a cloud-native fashion, before most anyone outside of Google was using that term. Pasta developed slowly, with the first developer piece coming online in August 2014 and the first production service following later that year, in December. The company open sourced the technology the following year.

“Pasta gave us the interface between the applications and development teams. Operations had to make sure Pasta is up and running, while Development was responsible for implementing containers that implemented the interface,” Fennell said.

Moving deeper into the public cloud

While Yelp was busy building these internal systems, AWS wasn’t sitting still. It was also improving its offerings with new instance types, new functionality and better APIs and tooling. Fennell reports this helped immensely as Yelp began a more complete move to the cloud.

He says there were a couple of tipping points as they moved more and more of the application to AWS — including eventually, the master database. This all happened in more recent years as they understood better how to use Pasta to control the processes wherever they lived. What’s more, he said that adoption of other AWS services was now possible due to tighter integration between the in-house data centers and AWS.

Photo: erhui1979/Getty Images

The first tipping point came around 2016 as all new services were configured for the cloud. He said they began to get much better at managing applications and infrastructure in AWS and their thinking shifted from how to migrate to AWS to how to operate and manage it.

Perhaps the biggest step in this years-long transformation came last summer when Yelp moved its master database from its own data center to AWS. “This was the last thing we needed to move over. Otherwise it’s clean up. As of 2018, we are serving zero production traffic through physical data centers,” he said. While they still have two data centers, they are getting to the point where they have only the minimum hardware required to run the network backbone.

Fennell said that getting a service up and running used to take anywhere from two weeks to a month before all of this was in place; now it takes just a couple of minutes. He says any loss of control from moving to the cloud has been easily offset by the convenience of using cloud infrastructure. “We get to focus on the things where we add value,” he said, and that’s the goal of every company.

Jun 01, 2018

Box acquires Progressly to expand workflow options

Box announced today that it has purchased Progressly, a Redwood City startup that focuses on workflow. All 12 Progressly employees will be joining Box immediately. The companies did not disclose the purchase price.

If you follow Box, you probably know the company announced a workflow tool in 2016 called Box Relay, along with a partnership with IBM to sell it inside large enterprises. Jeetu Patel, chief product officer at Box, says Relay is great for well-defined processes inside a company, like contract management or employee onboarding, but Box wanted to expand on that initial vision to build additional types of workflows. The Progressly team will help them do that.

Patel said that the company has heard from customers, especially in larger, more complex organizations, that they need a similar level of innovation on the automation side that they’ve been getting on the content side from Box.

“One of the things that we’ve done is to continue investing in partnerships around workflow with third parties. We have actually gone out and built a product with Relay. But we wanted to continue to make sure that we have an enhancement to our internal automation engine within Box itself. And so we just made an acquisition of a company called Progressly,” Patel told TechCrunch.

That should allow Box to build workflows that not only run within Box, but that can also integrate with external workflow engines like Pega and Nintex to build more complex automation in conjunction with the Box set of tools and services. These workflows could involve both internal employees and external organizations, moving content through a much more sophisticated process than Box Relay provides.

“What we wanted to do is just make sure that we double down in the investment in workflow, given the level of appetite we’ve seen from the market for someone like Box providing a solution like this,” Patel explained.

By buying Progressly, Box was able to acqui-hire a set of employees with a focused understanding of workflow who can help continue to build out that automation engine and incorporate it into the Box platform. Patel says how the company will monetize all of this is still open to discussion. For now, the Progressly team is already in the fold, and product announcements based on this acquisition could be coming later this year.

Progressly was founded in 2014 and was headquartered right down the street from Box in Redwood City. The company has raised $6 million, according to data on Crunchbase.

May 31, 2018

AWS launches pay-per-session pricing for its QuickSight BI tool

Amazon QuickSight, the company’s business intelligence tool for AWS, launched back in 2015, but it’s hard to say how much impact the service has made in the highly competitive BI market. The company has far from given up on this project, though, and today, it’s introducing a new pay-per-session pricing plan for access to QuickSight dashboards that is surely meant to give it a bit of a lift in a market where Tableau and Microsoft’s Power BI have captured much of the mindshare.

Under the new pricing plan, creating and publishing dashboards will still cost $18 per user per month. For readers, though, who only need access to these dashboards, AWS now offers a very simple option: they will pay $0.30 per session, up to a maximum of $5 per user per month. Under this scheme, a session is defined as the first 30 minutes from login.
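As a back-of-the-envelope check on how that reader pricing works (using only the numbers above, not an official AWS calculator), the monthly cost per reader is simply the per-session charge capped at $5:

```python
SESSION_PRICE = 0.30   # per 30-minute reader session, per the new plan
MONTHLY_CAP = 5.00     # per reader, per month

def reader_monthly_cost(sessions: int) -> float:
    """Monthly cost for one reader given how many sessions they opened."""
    return min(sessions * SESSION_PRICE, MONTHLY_CAP)

for sessions in (3, 10, 40):
    print(f"{sessions:>2} sessions -> ${reader_monthly_cost(sessions):.2f}")
# 3 sessions -> $0.90, 10 -> $3.00, 40 -> $5.00 (the cap kicks in at the 17th session)
```

In other words, an occasional reader costs well under a per-seat license, while heavy readers top out at $5 a month.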

Previously, AWS offered two tiers of QuickSight plans: a $9 per user/month standard plan and a $24/user/month enterprise edition with support for Active Directory and encryption at rest.

That $9/user/month plan is still available and probably still makes sense for smaller companies where those who build dashboards and those who consume them are often the same person. The new pricing plan replaces the existing enterprise edition.

QuickSight already significantly undercuts the pricing of services like Tableau and others, though we’re also talking about a somewhat more limited feature set. This new pay-per-session offering only widens the pricing gap.

“With highly scalable object storage in Amazon Simple Storage Service (Amazon S3), data warehousing at one-tenth the cost of traditional solutions in Amazon Redshift, and serverless analytics offered by Amazon Athena, customers are moving data into AWS at an unprecedented pace,” said Dorothy Nicholls, Vice President of Amazon QuickSight at AWS, in a canned comment. “What’s changed is that virtually all knowledge workers want easy access to that data and the insights that can be derived. It’s been cost-prohibitive to enable that access for entire companies until the Amazon QuickSight pay-per-session pricing — this is a game-changer in terms of information and analytics access.”

Current QuickSight users include the NFL, Siemens, Volvo and AutoTrader.

May 30, 2018

Salesforce keeps revenue pedal to the metal with another mammoth quarter

Salesforce just keeps on growing revenue. In another remarkable quarter, the company announced $3.01 billion in revenue for Q1 2019, with no signs of slowing down. That puts the CRM giant on a run rate of over $12 billion, with the company’s most optimistic projections suggesting it could go even higher. It’s also the first time the company has surpassed $3 billion in revenue in a single quarter.

As you might expect, Salesforce chairman and CEO Marc Benioff was over the moon about the results on the earnings call with analysts yesterday afternoon. “Revenue for the quarter rose to more than $3 billion, up 25%, putting us on a $12 billion revenue run rate, which was just amazing. And we now have $20.4 billion of future revenues under contract, which is the remaining transaction price; that’s up 36% from a year ago. Based on these strong results, we’re raising our full-year top line revenue guidance to $13.125 billion at the high end of our range, 25% growth for this year,” Benioff told analysts.

Brent Leary, an analyst who has been watching the CRM industry for many years, says CRM in general is a hot area and Salesforce has been able to take advantage. “With CRM becoming the biggest and fastest-growing business software category last year, according to Gartner, it’s easy to see with these numbers that Salesforce is leading the way forward. And they are in position to keep moving themselves and the category forward for years to come, as their acquisitions should continue to pay off for them,” Leary told TechCrunch.

Bringing MuleSoft into the fold

Further, Benioff rightly boasted that the company would be the fastest software company ever to reach $13 billion, and that it continued on the road toward its previously stated $20 billion goal. The $6.5 billion acquisition of MuleSoft earlier this year should help fuel that growth. “And this month, we closed our acquisition of MuleSoft, giving us the industry’s leading integration platform as well. Well, integration has never been more strategic,” Benioff stated.

Salesforce CEO Marc Benioff. Photo: TechCrunch

Bret Taylor, the company’s president and chief product officer, says the integration really ties in nicely with another of the company’s strategic initiatives, artificial intelligence, which they call Einstein. “[Customers] know that their AI is only as powerful as the data it has access to. And so when you think of MuleSoft, think unlocking data. The data is trapped in all these isolated systems on-premises, private cloud, public cloud, and MuleSoft, they can unlock this data and make it available to Einstein and make a smarter customer facing system,” Taylor explained.

Leary thinks there’s one other reason the company has done so well, one that’s hard to quantify in pure dollars, but perhaps an approach other companies should be paying attention to. “One of the more undercovered aspects of what Salesforce is doing is how their social responsibility and corporate culture is attracting a lot of positive attention,” he said. “That may be hard to boil down into revenue and profit numbers, but it has to be part of the reason why Salesforce continues to grow at the pace they have,” he added.

Keep on rolling

All of this has been adding up to incredible numbers. It’s easy to take revenue like this for granted because the company has been on such a sustained growth rate for such a long period of time, but just becoming a billion-dollar company has been a challenge for most Software as a Service providers up until now. A $13 billion run rate is in an entirely different stratosphere, and it could be lifting the entire category, says Jason Lemkin, founder at SaaStr, a firm that invests in SaaS startups.

“SaaS companies crossing $1B in ARR will soon become commonplace, as shocking as that might have sounded in say 2011. Atlassian, Box, Hubspot, and Zendesk are all well on their way there. The best SaaS companies are growing faster after $100m ARR, which is propelling them there,” Lemkin explained.

Salesforce is leading the way. Perhaps that’s because it has the same first-to-market advantage that Amazon has had in the cloud infrastructure market. It has gained such substantial momentum by being early, starting way back in 1999 before Software as a Service was seen as a viable business. In fact, Benioff told a story earlier this year that when he first started, he did the rounds of the venture capital firms in Silicon Valley and every single one turned him down.

You can bet that those firms have some deep regrets now, as the company’s revenue and stock price continue to soar. As of publication this morning, the stock was sitting at $130.90, up over 3 percent. All this company does is consistently make money, and that’s pretty much all you can ask from any organization. As Leary aptly put it, “Yea, they’re really killing it.”
