Sep 09, 2019

With its Kubernetes bet paying off, Cloud Foundry doubles down on developer experience

More than 50% of the Fortune 500 companies are now using the open-source Cloud Foundry Platform-as-a-Service project — either directly or through vendors like Pivotal — to build, test and deploy their applications. Like so many other projects, including the likes of OpenStack, Cloud Foundry went through a bit of a transition in recent years as more and more developers started looking to containers — and especially the Kubernetes project — as a platform on which to develop. Now, however, the project is ready to focus on what always differentiated it from its closed- and open-source competitors: the developer experience.

Long before Docker popularized containers for application deployment, Cloud Foundry had already bet on containers and written its own container orchestration service. With all of the momentum behind Kubernetes, though, it’s no surprise that many in the Cloud Foundry community started to look at this new project as a replacement for the platform’s existing container technology.

Sep 05, 2019

Atlassian launches free tiers for all its cloud products, extends premium pricing plan

At our TC Sessions: Enterprise event, Atlassian co-CEO Scott Farquhar today announced a number of updates to how the company will sell its cloud-based services. These include the launch of new premium plans for more of its products, as well as the addition of a free tier for all of the company’s services that didn’t already offer one. Atlassian now also offers discounted cloud pricing for academic institutions and nonprofit organizations.

The company previously announced its premium plans for Jira Software Cloud and Confluence Cloud. Now, it is adding Jira Service Desk to this lineup, and chances are it’ll add more of its services over time. The premium plan adds a 99.9% uptime SLA, unlimited storage and additional support. Until now, Atlassian sold these products solely based on the number of users, but didn’t offer a specific enterprise plan.

As Harsh Jawharkar, the head of go-to-market for Cloud Platform at Atlassian, told me, many of its larger customers, who often ran the company’s products on their own servers before, are now looking to move to the cloud and hand over to Atlassian the day-to-day operations of these services. That’s in part because they are more comfortable with the idea of moving to the cloud at this point — and because Atlassian probably knows how to run its own services better than anybody else. 

For these companies, Atlassian is also introducing a number of new features today. Those include soon-to-launch data residency controls for companies that need to ensure that their data stays in a certain geographic region, as well as the ability to run Jira and Confluence Cloud behind customized URLs that align with a company’s brand, which will launch in early access in 2020. Maybe more important, though, are the new features for Atlassian Access, the company’s command center that helps enterprises manage Atlassian’s cloud products. Access now supports single sign-on with Google Cloud Identity and Microsoft Active Directory Federation Services, for example. The company is also partnering with McAfee and Bitglass to offer additional advanced security features and launch a cross-product audit log. Enterprise admins will also soon get access to a new dashboard that will help them understand how Atlassian’s tools are being used across the organization.

But that’s not all. The company is also launching new tools to make customer migrations to its cloud products easier, with initial support for Confluence and Jira coming later this year. There are also new extended cloud trial licenses, which a lot of customers have asked for, Jawharkar told me, because the relatively short trial periods the company previously offered weren’t quite long enough for companies to fully understand their needs.

This is a big slew of updates for Atlassian — maybe its biggest enterprise-centric release since the company’s launch. It has clearly reached a point where it had to start offering these enterprise features if it wanted to grow its market and bring more of these large companies on board. In its early days, Atlassian mostly grew by selling directly to teams within a company. These days, it has to focus a bit more on selling to executives as it tries to bring more enterprises on board — and those companies have very specific needs that the company didn’t have to address before. Today’s launches clearly show that it is now doing so — at least for its cloud-based products.

The company isn’t forgetting about other users either, though. It’ll still offer entry-level plans for smaller teams and it’s now adding free tiers to products like Jira Software, Confluence, Jira Service Desk and Jira Core. They’ll join Trello, Bitbucket and Opsgenie, which already feature free versions. Going forward, academic institutions will receive 50% off their cloud subscriptions and nonprofits will receive 75% off.

It’s obvious that Atlassian is putting a lot of emphasis on its cloud services. It’s not doing away with its self-hosted products anytime soon, but its focus is clearly elsewhere. The company itself started this process a few years ago and a lot of this work is now coming to fruition. As Anu Bharadwaj, the head of Cloud Platform at Atlassian, told me, this move to a fully cloud-native stack enabled many of today’s announcements, and she expects that it’ll bring a lot of new customers to its cloud-based services.

Aug 26, 2019

Why now is the time to get ready for quantum computing

For the longest time, even while scientists were working to make it a reality, quantum computing seemed like science fiction. It’s hard enough to make any sense of quantum physics to begin with, let alone the practical applications of this less-than-intuitive theory. But we’ve now arrived at a point where companies like D-Wave, Rigetti, IBM and others actually produce real quantum computers.

They are still in their infancy and nowhere near powerful enough to run anything but very basic programs, simply because they can’t run long enough before the quantum states decohere, but virtually all experts say that these are solvable problems and that now is the time to prepare for the advent of quantum computing. Indeed, Gartner just launched a Quantum Volume metric, based on IBM’s research, that aims to help CIOs prepare for the impact of quantum computing.

To discuss the state of the industry and why now is the time to get ready, I sat down with IBM’s Jay Gambetta, who will also join us for a panel on Quantum Computing at our TC Sessions: Enterprise event in San Francisco on September 5, together with Microsoft’s Krysta Svore and Intel’s Jim Clarke.

Aug 26, 2019

Nvidia and VMware team up to make GPU virtualization easier

Nvidia today announced that it has been working with VMware to bring its virtual GPU technology (vGPU) to VMware’s vSphere and VMware Cloud on AWS. The company’s core vGPU technology isn’t new, but it now supports server virtualization to enable enterprises to run their hardware-accelerated AI and data science workloads in environments like VMware’s vSphere, using its new vComputeServer technology.

Traditionally (as far as that’s a thing in AI training), GPU-accelerated workloads have tended to run on bare-metal servers, which were typically managed separately from the rest of a company’s servers.

“With vComputeServer, IT admins can better streamline management of GPU accelerated virtualized servers while retaining existing workflows and lowering overall operational costs,” Nvidia explains in today’s announcement. This also means that businesses will reap the cost benefits of GPU sharing and aggregation, thanks to the improved utilization this technology promises.

Note that vComputeServer works with VMware vSphere, vCenter and vMotion, as well as VMware Cloud. Indeed, the two companies are using the same vComputeServer technology to also bring accelerated GPU services to VMware Cloud on AWS. This allows enterprises to take their containerized applications from their own data centers to the cloud as needed — and then hook into AWS’s other cloud-based technologies.


“From operational intelligence to artificial intelligence, businesses rely on GPU-accelerated computing to make fast, accurate predictions that directly impact their bottom line,” said Nvidia founder and CEO Jensen Huang. “Together with VMware, we’re designing the most advanced and highest performing GPU-accelerated hybrid cloud infrastructure to foster innovation across the enterprise.”

Aug 22, 2019

Enterprise software is hot — who would have thought?

Once considered the most boring of topics, enterprise software is now getting infused with such energy that it is arguably the hottest space in tech.

It’s been a long time coming. And it is the developers, software engineers and veteran technologists with deep experience building at-scale technologies who are energizing enterprise software. They have learned to build resilient and secure applications with open-source components through continuous delivery practices that align technical requirements with customer needs. And now they are developing the application architectures and tools that let enterprises develop and manage software at scale and make the same transformation.

“Enterprise had become a dirty word, but there’s a resurgence going on and Enterprise doesn’t just mean big and slow anymore,” said JD Trask, co-founder of Raygun enterprise monitoring software. “I view the modern enterprise as one that expects their software to be as good as consumer software. Fast. Easy to use. Delivers value.”

The shift to scale-out computing and the rise of the container ecosystem, driven largely by startups, are disrupting the entire stack, notes Andrew Randall, vice president of business development at Kinvolk.

In advance of TechCrunch’s first enterprise-focused event, TC Sessions: Enterprise, The New Stack examined the commonalities between the numerous enterprise-focused companies who sponsor us. Their experiences help illustrate the forces at play behind the creation of the modern enterprise tech stack. In every case, the founders and CTOs recognize the need for speed and agility, with the ultimate goal of producing software that’s uniquely in line with customer needs.

We’ll explore these topics in more depth at The New Stack pancake breakfast and podcast recording at TC Sessions: Enterprise. Starting at 7:45 a.m. on Sept. 5, we’ll be serving breakfast and hosting a panel discussion on “The People and Technology You Need to Build a Modern Enterprise,” with Sid Sijbrandij, founder and CEO, GitLab, and Frederic Lardinois, enterprise writer and editor, TechCrunch, among others. Questions from the audience are encouraged and rewarded, with a raffle prize awarded at the end.

Traditional virtual machine infrastructure was originally designed to help manage server sprawl for systems-of-record software — not to scale out across a fabric of distributed nodes. The disruptors transforming the historical technology stack view the application, not the hardware, as the main focus of attention. Companies in The New Stack’s sponsor network provide examples of the shift toward software that they aim to inspire in their enterprise customers. Portworx provides persistent state for containers; NS1 offers a DNS platform that orchestrates the delivery of internet and enterprise applications; Lightbend combines the scalability and resilience of microservices architecture with the real-time value of streaming data.

“Application development and delivery have changed. Organizations across all industry verticals are looking to leverage new technologies, vendors and topologies in search of better performance, reliability and time to market,” said Kris Beevers, CEO of NS1. “For many, this means embracing the benefits of agile development in multicloud environments or building edge networks to drive maximum velocity.”

Enterprise software startups are delivering that value, while they embody the practices that help them deliver it.

The secrets to speed, agility and customer focus

Speed matters, but only if the end result aligns with customer needs. Faster time to market is often cited as the main driver behind digital transformation in the enterprise. But speed must also be matched by agility and the ability to adapt to customer needs. That means embracing continuous delivery, which Martin Fowler describes as the process that makes it possible to put software into production at any time, with the workflows and the pipeline to support it.

Continuous delivery (CD) makes it possible to develop software that can adapt quickly, meet customer demands and provide a level of satisfaction with benefits that enhance the value of the business and the overall brand. CD has become a major category in cloud-native technologies, with companies such as CircleCI, CloudBees, Harness and Semaphore all finding their own ways to approach the problems enterprises face as they often struggle with the shift.
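To make the idea concrete, here is a deliberately minimal sketch of a delivery gate in Python: every change runs the same automated stages, and anything that passes them is immediately releasable. The stage commands (pytest, a build step, a deploy.sh script) are placeholder assumptions for illustration, not any particular vendor’s pipeline.

    #!/usr/bin/env python3
    """Minimal continuous-delivery gate: run each stage in order and stop on the
    first failure, so only changes that pass every check ever reach production."""
    import subprocess
    import sys

    STAGES = [
        ["pytest", "-q"],               # automated tests
        ["python", "-m", "build"],      # build a release artifact
        ["./deploy.sh", "production"],  # placeholder deploy step
    ]

    def run_pipeline() -> None:
        for cmd in STAGES:
            print(f"running: {' '.join(cmd)}")
            if subprocess.run(cmd).returncode != 0:
                sys.exit(f"stage failed: {' '.join(cmd)}")
        print("all stages passed; this build is releasable")

    if __name__ == "__main__":
        run_pipeline()

The point of the sketch is the gating behavior, not the specific commands: in a real CD setup the stages live in a hosted pipeline (CircleCI, CloudBees and the others named above) rather than a local script.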

“The best-equipped enterprises are those [that] realize that the speed and quality of their software output are integral to their bottom line,” Rob Zuber, CTO of CircleCI, said.

Speed is also in large part why monitoring and observability have held their value and continue to be part of the larger dimension of at-scale application development, delivery and management. Better data collection and analysis, assisted by machine learning and artificial intelligence, allow companies to quickly troubleshoot and respond to customer needs with reduced downtime and tight DevOps feedback loops. Companies in our sponsor network that fit in this space include Raygun for error detection; Humio, which provides observability capabilities; InfluxData with its time-series data platform for monitoring; Epsagon, the monitoring platform for serverless architectures; and Tricentis for software testing.

“Customer focus has always been a priority, but the ability to deliver an exceptional experience will now make or break a ‘modern enterprise,’” said Wolfgang Platz, founder of Tricentis, which makes automated software testing tools. “It’s absolutely essential that you’re highly responsive to the user base, constantly engaging with them to add greater value. This close and constant collaboration has always been central to longevity, but now it’s a matter of survival.”

DevOps is a bit overplayed, but it still is the mainstay workflow for cloud-native technologies and critical to achieving engineering speed and agility in a decoupled, cloud-native architecture. However, DevOps is also undergoing its own transformation, buoyed by the increasing automation and transparency allowed through the rise of declarative infrastructure, microservices and serverless technologies. This is cloud-native DevOps. Not a tool or a new methodology, but an evolution of the longstanding practices that further align developers and operations teams — but now also expanding to include security teams (DevSecOps), business teams (BizDevOps) and networking (NetDevOps).

“We are in this constant feedback loop with our customers where, while helping them in their digital transformation journey, we learn a lot and we apply these learnings for our own digital transformation journey,” Francois Dechery, chief strategy officer and co-founder of CloudBees, said. “It includes finding the right balance between developer freedom and risk management. It requires the creation of what we call a continuous everything culture.”

Leveraging open-source components is also core to achieving engineering speed. Open-source use allows engineering teams to focus on building code that creates or supports the core business value. Startups in this space include Tidelift and open-source security companies such as Capsule8. Organizations in our sponsor portfolio that play roles in the development of at-scale technologies include The Linux Foundation, the Cloud Native Computing Foundation and the Cloud Foundry Foundation.

“Modern enterprises … think critically about what they should be building themselves and what they should be sourcing from somewhere else,” said Chip Childers, CTO of Cloud Foundry Foundation. “Talented engineers are one of the most valuable assets a company can apply to being competitive, and ensuring they have the freedom to focus on differentiation is super important.”

Enterprises need great engineering talent, and they need to give those engineers the ability to build secure and reliable systems at scale, along with the trust and the direct access to hardware that can serve as a differentiator.

Is the enterprise really ready?

The bleeding edge can bleed too much for the liking of enterprise customers, said James Ford, an analyst and consultant.

“It’s tempting to live by mantras like ‘wow the customer,’ ‘never do what customers want (instead build innovative solutions that solve their need),’ ‘reduce to the max,’ … and many more,” said Bernd Greifeneder, CTO and co-founder of Dynatrace. “But at the end of the day, the point is that technology is here to help with smart answers … so it’s important to marry technical expertise with enterprise customer need, and vice versa.”

How the enterprise adopts new ways of working will affect how startups ultimately fare. The container hype has cooled a bit and technologists have more solid viewpoints about how to build out architecture.

One notable trend to watch: the role of cloud services built on projects such as Firecracker. AWS Lambda runs on Firecracker, the open-source virtualization technology originally built at Amazon Web Services. Firecracker serves as a way to get the speed and density that come with containers plus the hardware isolation and security capabilities that virtualization offers. Startups such as Weaveworks have developed a platform on Firecracker, and the OpenStack Foundation’s Kata Containers project can also use it.

“Firecracker makes it easier for the enterprise to have secure code,” Ford said. It reduces the attack surface. “With its minimal footprint, the user has control. It means fewer features that are misconfigured, which is a major security vulnerability.”
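For a sense of what that minimal footprint looks like in practice, Firecracker’s control plane is a small REST API served over a Unix domain socket. Below is a rough Python sketch of booting a microVM through that API; the endpoints follow Firecracker’s getting-started documentation, but the socket path and the kernel and rootfs file names are placeholder assumptions, so treat it as an illustration rather than a definitive implementation.

    import http.client
    import json
    import socket

    # Assumes Firecracker was started with: firecracker --api-sock /tmp/firecracker.socket
    API_SOCKET = "/tmp/firecracker.socket"

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP client that talks to Firecracker's API over its Unix domain socket."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    def api_put(resource, body):
        """Send one PUT request to the Firecracker API and return the HTTP status."""
        conn = UnixHTTPConnection(API_SOCKET)
        conn.request("PUT", resource, body=json.dumps(body),
                     headers={"Content-Type": "application/json"})
        status = conn.getresponse().status
        conn.close()
        return status

    # Size the microVM, point it at a kernel and root filesystem, then boot it.
    api_put("/machine-config", {"vcpu_count": 1, "mem_size_mib": 128})
    api_put("/boot-source", {"kernel_image_path": "hello-vmlinux.bin",
                             "boot_args": "console=ttyS0 reboot=k panic=1 pci=off"})
    api_put("/drives/rootfs", {"drive_id": "rootfs",
                               "path_on_host": "hello-rootfs.ext4",
                               "is_root_device": True,
                               "is_read_only": False})
    api_put("/actions", {"action_type": "InstanceStart"})

A handful of JSON calls and the guest is running with full VM isolation, which is the property Ford is pointing at: there is very little surface to misconfigure.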

Enterprise startups are hot. How they succeed will determine how well they can stay unique in the face of ever-expanding cloud services and the at-scale startups that inevitably launch their own services. The answer may lie in the middle, with purpose-built architectures that use open-source components such as Firecracker to provide the capabilities of containers along with the hardware isolation that comes with virtualization.

Hope to see you at TC Sessions: Enterprise. Get there early. We’ll be serving pancakes to start the day. As we like to say, “Come have a short stack with The New Stack!”

Aug 22, 2019

NASA’s new HPE-built supercomputer will prepare for landing Artemis astronauts on the Moon

NASA and Hewlett Packard Enterprise (HPE) have teamed up to build a new supercomputer, which will serve NASA’s Ames Research Center in California and be used to develop models and simulations of the landing process for Artemis Moon missions.

The new supercomputer is called “Aitken,” named after American astronomer Robert Grant Aitken, and it offers up to 3.69 petaFLOPs of theoretical peak performance. Aitken is custom-designed by HPE and NASA to work with the Ames modular data center, a project NASA undertook starting in 2017 to massively reduce the amount of water and energy used in cooling its supercomputing hardware.

Aitken employs second-generation Intel Xeon processors and Mellanox InfiniBand high-speed networking, and has 221 TB of memory on board. It’s the result of four years of collaboration between NASA and HPE, and it will model different methods of entry, descent and landing for Moon-destined Artemis spacecraft, running simulations to determine possible outcomes and help determine the best, safest approach.

This isn’t the only collaboration between HPE and NASA: The enterprise computer maker built for the agency a new kind of supercomputer able to withstand the rigors of space, and sent it up to the ISS in 2017 for preparatory testing ahead of potential use on longer missions, including Mars. The two partners then opened that supercomputer for use in third-party experiments last year.

HPE also announced earlier this year that it was buying supercomputer company Cray for $1.3 billion. Cray is another long-time partner of NASA’s supercomputing efforts, dating back to the space agency’s establishment of a dedicated computational modeling division and of its Central Computing Facility at Ames Research Center.

Aug 21, 2019

Splunk acquires cloud monitoring service SignalFx for $1.05B

Splunk, the publicly traded data processing and analytics company, today announced that it has acquired SignalFx for a total price of about $1.05 billion. Approximately 60% of this will be in cash and 40% in Splunk common stock. The companies expect the acquisition to close in the second half of 2020.

SignalFx, which emerged from stealth in 2015, provides real-time cloud monitoring solutions, predictive analytics and more. Upon close, Splunk argues, this acquisition will allow it to become a leader “in observability and APM for organizations at every stage of their cloud journey, from cloud-native apps to homegrown on-premises applications.”

Indeed, the acquisition will likely make Splunk a far stronger player in the cloud space as it expands its support for cloud-native applications and the modern infrastructures and architectures those rely on.


Ahead of the acquisition, SignalFx had raised a total of $178.5 million, according to Crunchbase, including a recent Series E round. Investors include General Catalyst, Tiger Global Management, Andreessen Horowitz and CRV. Its customers include the likes of AthenaHealth, Change.org, Kayak, NBCUniversal and Yelp.

“Data fuels the modern business, and the acquisition of SignalFx squarely puts Splunk in position as a leader in monitoring and observability at massive scale,” said Doug Merritt, president and CEO, Splunk, in today’s announcement. “SignalFx will support our continued commitment to giving customers one platform that can monitor the entire enterprise application lifecycle. We are also incredibly impressed by the SignalFx team and leadership, whose expertise and professionalism are a strong addition to the Splunk family.”

Aug 19, 2019

Five great reasons to attend TechCrunch’s Enterprise show Sept. 5 in SF

The vast enterprise tech category is Silicon Valley’s richest, and today it’s poised to change faster than ever before. That’s probably the biggest reason to come to TechCrunch’s first-ever show focused entirely on enterprise. But here are five more reasons to commit to joining TechCrunch’s editors on September 5 at San Francisco’s Yerba Buena Center for an outstanding day (agenda here) addressing the tech tsunami sweeping through enterprise. 

No. 1: Artificial intelligence
AI is at once the most consequential and the most hyped technology, and no one doubts that it will change business software and increase productivity like few, if any, technologies before it. To peek ahead into that future, TechCrunch will interview Andrew Ng, arguably the world’s most experienced AI practitioner at huge companies (Baidu, Google) as well as at startups. AI will be a theme across every session, but we’ll address it again head-on in a panel with investor Jocelyn Goldfein (Zetta), founder Bindu Reddy (Reality Engines) and executive John Ball (Salesforce / Einstein).

No. 2: Data, the cloud and Kubernetes
If AI is at the dawn of tomorrow, cloud transformation is the high noon of today. Indeed, 90% of the world’s data was created in the past two years, and no enterprise can keep its data hoard on-prem forever. Azure CTO Mark Russinovich will discuss Microsoft’s vision for the cloud. Leaders in the open-source Kubernetes revolution — Joe Beda (VMware), Aparna Sinha (Google) and others — will dig into what Kubernetes means to companies making the move to cloud. And last, there is the question of how to find signal in all the data — which will bring three visionary founders to the stage: Benoit Dageville (Snowflake), Ali Ghodsi (Databricks) and Murli Thirumale (Portworx).

No. 3: Everything else on the main stage!
Let’s start with a fireside chat with SAP CEO Bill McDermott and Qualtrics Chief Experience Officer Julie Larson-Green. We have top investors talking about where they are making their bets, and security experts talking data and privacy. And then there is quantum computing, the technology revolution waiting on the other side of AI: Jay Gambetta, the principal theoretical scientist behind IBM’s quantum computing effort; Jim Clarke, the director of quantum hardware at Intel Labs; and Krysta Svore, who leads Microsoft’s quantum effort.

All told, there are 21 programming sessions.

No. 4: Network and get your questions answered
There will be two Q&A breakout sessions with top enterprise investors; this is for founders (and anyone else) to query investors directly. Plus, TechCrunch’s unbeatable CrunchMatch app makes it really easy to set up meetings with the other attendees, an incredible array of folks, plus the 20 early-stage startups exhibiting on the expo floor.

No. 5: SAP
Enterprise giant SAP is our sponsor for the show, and they are not only bringing a squad of top executives, they are producing four parallel track sessions, featuring SAP Chief Innovation Officer Max Wessel, SAP Chief Designer and Futurist Martin Wezowski and SAP.iO managing director Ram Jambunathan, in sessions including how to scale up an enterprise startup, how startups win large enterprise customers, and what the enterprise future looks like.

Check out the complete agenda. Don’t miss this show! This line-up is a view into the future like none other. 

Grab your $349 tickets today, and don’t wait until the day of the show to book, because prices go up at the door!

We still have two Startup Demo Tables left. Each table comes with four tickets and a prime location to demo your startup on the expo floor. Book your demo table now before they’re all gone!

Aug 15, 2019

Microsoft Azure CTO Mark Russinovich will join us for TC Sessions: Enterprise on September 5

Being the CTO for one of the three major hypercloud providers may seem like enough of a job for most people, but Mark Russinovich, the CTO of Microsoft Azure, has a few other talents in his back pocket. Russinovich, who will join us for a fireside chat at our TechCrunch Sessions: Enterprise event in San Francisco on September 5 (p.s. early-bird sale ends Friday), is also an accomplished novelist who has published four novels, all of which center around tech and cybersecurity.

At our event, though, we won’t focus on his literary accomplishments (except for maybe his books about Windows Server) as much as on the trends he’s seeing in enterprise cloud adoption. Microsoft, maybe more so than its competitors, has made enterprise customers and their needs the focus of its cloud initiatives from the outset. Today, as the majority of enterprises are looking to move at least some of their legacy workloads into the cloud, they are often stumped by the sheer complexity of that undertaking.

In our fireside chat, we’ll talk about what Microsoft is doing to reduce this complexity and how enterprises can maximize their current investments into the cloud, both for running new cloud-native applications and for bringing legacy applications into the future. We’ll also talk about new technologies that can make the move to the cloud more attractive to enterprises, including the current buzz around edge computing, IoT, AI and more.

Before joining Microsoft, Russinovich, who has a Ph.D. in computer engineering from Carnegie Mellon, was the co-founder and chief architect of Winternals Software, which Microsoft acquired in 2006. During his time at Winternals, Russinovich discovered the infamous Sony rootkit. Over his 13 years at Microsoft, he moved from Technical Fellow up to the CTO position for Azure, which continues to grow at a rapid clip as it looks to challenge AWS’s leadership in total cloud revenue.

Tomorrow, Friday, August 16 is your last day to save $100 on tickets before prices go up. Book your early-bird tickets now and keep that Benjamin in your pocket.

If you’re an early-stage startup, we only have three demo table packages left! Each demo package comes with four tickets and a great location for your company to get in front of attendees. Book your demo package today before we sell out!

Aug 15, 2019

How Facebook does IT

If you have ever worked at any sizable company, the word “IT” probably doesn’t conjure up many warm feelings. If you’re working for an old, traditional enterprise company, you probably don’t expect anything else. If you’re working for a modern tech company, though, chances are your expectations are a bit higher. And once you’re at the scale of a company like Facebook, a lot of the third-party services that work for smaller companies simply don’t work anymore.

To discuss how Facebook thinks about its IT strategy and why it now builds most of its IT tools in-house, I sat down with the company’s CIO, Atish Banerjea, at its Menlo Park headquarters.

Before joining Facebook in 2016 to head up what it now calls its “Enterprise Engineering” organization, Banerjea was the CIO or CTO at companies like NBCUniversal, Dex One and Pearson.

“If you think about Facebook 10 years ago, we were very much a traditional IT shop at that point,” he told me. “We were responsible for just core IT services, responsible for compliance and responsible for change management. But basically, if you think about the trajectory of the company, we were probably about 2,000 employees around the end of 2010. But at the end of last year, we were close to 37,000 employees.”

Traditionally, IT organizations rely on third-party tools and software, but as Facebook grew to this current size, many third-party solutions simply weren’t able to scale with it. At that point, the team decided to take matters into its own hands and go from being a traditional IT organization to one that could build tools in-house. Today, the company is pretty much self-sufficient when it comes to running its IT operations, but getting to this point took a while.

“We had to pretty much reinvent ourselves into a true engineering product organization and went to a full ‘build’ mindset,” said Banerjea. That’s not something every organization is obviously able to do, but, as Banerjea joked, one of the reasons why this works at Facebook “is because we can — we have that benefit of the talent pool that is here at Facebook.”


The company then took this talent and basically replicated the kind of team it would build on the consumer side to build out its IT tools, with engineers, designers, product managers, content strategists and researchers. “We also made the decision at that point that we will hold the same bar and we will hold the same standards so that the products we create internally will be as world-class as the products we’re rolling out externally.”

One of the tools that wasn’t up to Facebook’s scaling challenges was video conferencing. The company was using a third-party tool for that, but that just wasn’t working anymore. In 2018, Facebook was consuming about 20 million conference minutes per month. In 2019, the company is now at 40 million per month.

Besides the obvious scaling challenge, Facebook is also doing this to be able to offer its employees custom software that fits their workflows. It’s one thing to adapt existing third-party tools, after all, and another to build custom tools to support a company’s business processes.

Banerjea told me that creating this new structure was a relatively easy sell inside the company. Every transformation comes with its own challenges, though. For Facebook’s Enterprise Engineering team, that included having to recruit new skill sets into the organization. The first few months of this process were painful, Banerjea admitted, as the company had to up-level the skills of many existing employees and shed a significant number of contractors. “There are certain areas where we really felt that we had to have Facebook DNA in order to make sure that we were actually building things the right way,” he explained.

Facebook’s structure creates an additional challenge for the team. When you’re joining Facebook as a new employee, you have plenty of teams to choose from, after all, and if you have the choice of working on Instagram or WhatsApp or the core Facebook app — all of which touch millions of people — working on internal tools with fewer than 40,000 users doesn’t sound all that exciting.

“When young kids who come straight from college and they come into Facebook, they don’t know any better. So they think this is how the world is,” Banerjea said. “But when we have experienced people come in who have worked at other companies, the first thing I hear is ‘oh my goodness, we’ve never seen internal tools of this caliber before.’ The way we recruit, the way we do performance management, the way we do learning and development — every facet of how that employee works has been touched in terms of their life cycle here.”


Facebook first started building these internal tools around 2012, though it wasn’t until Banerjea joined in 2016 that it rebranded the organization and set up today’s structure. He also noted that some of those original tools were good, but not up to the caliber employees would expect from the company.

“The really big change that we went through was up-leveling our building skills to really become at the same caliber as if we were to build those products for an external customer. We want to have the same experience for people internally.”

The company went as far as replacing and rebuilding the commercial Enterprise Resource Planning (ERP) system it had been using for years. If there’s one thing that big companies rely on, it’s their ERP systems, given they often handle everything from finance and HR to supply chain management and manufacturing. That’s basically what all of their backend tools rely on (and what companies like SAP, Oracle and others charge a lot of money for). “In that 2016/2017 time frame, we realized that that was not a very good strategy,” Banerjea said. In Facebook’s case, the old ERP handled the inventory management for its data centers, among many other things. When that old system went down, the company couldn’t ship parts to its data centers.

“So what we started doing was we started peeling off all the business logic from our backend ERP and we started rewriting it ourselves on our own platform,” he explained. “Today, for our ERP, the backend is just the database, but all the business logic, all of the functionality is actually all custom written by us on our own platform. So we’ve completely rewritten our ERP, so to speak.”

In practice, all of this means that ideally, Facebook’s employees face far less friction when they join the company, for example, or when they need to replace a broken laptop, get a new phone to test features or simply order a new screen for their desk.

One classic use case is onboarding, where new employees get their company laptop, mobile phones and access to all of their systems, for example. At Facebook, that’s also the start of a six-week bootcamp that gets new engineers up to speed with how things work at Facebook. Back in 2016, when new classes tended to still have fewer than 200 new employees, that was still mostly a manual task. Today, with far more incoming employees, the Enterprise Engineering team has automated most of that — and that includes managing the supply chain that ensures the laptops and phones for these new employees are actually available.

But the team also built the backend that powers the company’s more traditional IT help desks, where employees can walk up and get their issues fixed (and passwords reset).


To talk more about how Facebook handles the logistics of that, I sat down with Koshambi Shah, who heads up the company’s Enterprise Supply Chain organization, which pretty much handles every piece of hardware and software the company delivers and deploys to its employees around the world (and that global nature of the company brings its own challenges and additional complexity). The team, which has fewer than 30 people, is made up of employees with experience in manufacturing, retail and consumer supply chains.

Typically, enterprises offer employees a minimal set of choices when it comes to the laptops and phones they issue, and the operating systems that can run on them tend to be limited. Facebook’s engineers have to be able to test new features on a wide range of devices and operating systems. There are, after all, still users on the iPhone 4s or BlackBerry that the company wants to support. To do this, Shah’s organization actually makes thousands of SKUs available to employees and is able to deliver 98% of them within three days or less. It’s not just sending a laptop via FedEx, though. “We do the budgeting, the financial planning, the forecasting, the supply/demand balancing,” Shah said. “We do the asset management. We make sure the asset — what is needed, when it’s needed, where it’s needed — is there consistently.”

In many large companies, every asset request is second-guessed. Facebook, on the other hand, places a lot of trust in its employees, it seems. There’s a self-service portal, the Enterprise Store, that allows employees to easily request phones, laptops, chargers (which get lost a lot) and other accessories as needed, without having to wait for approval (though if you request a laptop every week, somebody will surely want to have a word with you). Everything is obviously tracked in detail, but the overall experience is closer to shopping at an online retailer than using an enterprise asset management system. The Enterprise Store will tell you where a device is available, for example, so you can pick it up yourself (but you can always have it delivered to your desk, too, because this is, after all, a Silicon Valley company).


For accessories, Facebook also offers self-service vending machines, and employees can walk up to the help desk.

The company also recently introduced an Amazon Locker-style setup that allows employees to check out devices as needed. At these smart lockers, employees simply have to scan their badge, choose a device and, once the appropriate door has opened, pick up the phone, tablet, laptop or VR devices they were looking for and move on. Once they are done with it, they can come back and check the device back in. No questions asked. “We trust that people make the right decision for the good of the company,” Shah said. For laptops and other accessories, the company does show the employee the price of those items, though, so it’s clear how much a certain request costs the company. “We empower you with the data for you to make the best decision for your company.”

Talking about cost, Shah told me the Supply Chain organization tracks a number of metrics. One of those is obviously cost. “We do give back about 4% year-over-year, that’s our commitment back to the businesses in terms of the efficiencies we build for every user we support. So we measure ourselves in terms of cost per supported user. And we give back 4% on an annualized basis in the efficiencies.”

Unsurprisingly, the company has by now gathered enough data about employee requests (Shah said the team fulfills about half a million transactions per year) that it can use machine learning to understand trends and be proactive about replacing devices, for example.


Facebook’s Enterprise Engineering group doesn’t just support internal customers, though. It also runs the company’s internal and external events, including the likes of F8, the company’s annual developer conference. To do this, the company built out conference rooms that can seat thousands of people, with all of the logistics that go with that.

The company also showed me one of its newest meeting rooms where there are dozens of microphones and speakers hanging from the ceiling that make it easier for everybody in the room to participate in a meeting and be heard by everybody else. That’s part of what the organization’s “New Builds” team is responsible for, and something that’s possible because the company also takes a very hands-on approach to building and managing its offices.

Facebook also runs a number of small studios in its Menlo Park and New York offices, where both employees and the occasional external VIP can host Facebook Live videos.


Indeed, live video, it seems, is one of the cornerstones of how Facebook employees collaborate and help employees who work from home. Typically, you’d just use the camera on your laptop or maybe a webcam connected to your desktop to do so. But because Facebook actually produces its own camera system with the consumer-oriented Portal, Banerjea’s team decided to use that.

“What we have done is we have actually re-engineered the Portal,” he told me. “We have connected with all of our video conferencing systems in the rooms. So if I have a Portal at home, I can dial into my video conferencing platform and have a conference call just like I’m sitting in any other conference room here in Facebook. And all that software, all the engineering on the portal, that has been done by our teams — some in partnership with our production teams, but a lot of it has been done with Enterprise Engineering.”

Unsurprisingly, there are also groups that manage some of the core infrastructure and security for the company’s internal tools and networks. All of those tools run in the same data centers as Facebook’s consumer-facing applications, though they are obviously sandboxed and isolated from them.

It’s one thing to build all of these tools for internal use, but now, the company is also starting to think about how it can bring some of these tools it built for internal use to some of its external customers. You may not think of Facebook as an enterprise company, but with its Workplace collaboration tool, it has an enterprise service that it sells externally, too. Last year, for the first time, Workplace added a new feature that was incubated inside of Enterprise Engineering. That feature was a version of Facebook’s public Safety Check that the Enterprise Engineering team had originally adapted to the company’s own internal use.

“Many of these things that we are building for Facebook, because we are now very close partners with our Workplace team — they are in the enterprise software business and we are the enterprise software group for Facebook — and many [features] we are building for Facebook are of interest to Workplace customers.”

As Workplace hit the market, Banerjea ended up talking to the CIOs of potential users, including the likes of Delta Air Lines, about how Facebook itself used Workplace internally. But as companies started to adopt Workplace, they realized that they needed integrations with existing third-party services like ERP platforms and Salesforce. Those companies then asked Facebook if it could build those integrations or work with partners to make them available. But at the same time, those customers got exposed to some of the tools that Facebook itself was building internally.

“Safety Check was the first one,” Banerjea said. “We are actually working on three more products this year.” He wouldn’t say what these are, of course, but there is clearly a pipeline of tools that Facebook has built for internal use that it is now looking to commercialize. That’s pretty unusual for any IT organization, which, after all, tends to only focus on internal customers. I don’t expect Facebook to pivot to an enterprise software company anytime soon, but initiatives like this are clearly important to the company and, in some ways, to the morale of the team.

This creates a bit of friction, too, though, given that the Enterprise Engineering group’s mission is to build internal tools for Facebook. “We are now figuring out the deployment model,” Banerjea said. Who, for example, is going to support the external tools the team built? Is it the Enterprise Engineering group or the Workplace team?

Chances are, then, that Facebook will bring some of the tools it built for internal use to more enterprises in the long run. That definitely puts a different spin on the idea of the consumerization of enterprise tech. Clearly, not every company operates at the scale of Facebook and needs to build its own tools — and even some companies that could benefit from it don’t have the resources to do so. For Facebook, though, that move seems to have paid off and the tools I saw while talking to the team definitely looked more user-friendly than any off-the-shelf enterprise tools I’ve seen at other large companies.
