Apr 21, 2018
--

Pivotal CEO talks IPO and balancing life in Dell family of companies

Pivotal has kind of a strange role for a company. On one hand, it’s part of the EMC federation of companies that Dell acquired in 2016 for a cool $67 billion, but it’s also an independently operated entity within that broader Dell family of companies — and that has to be a fine line to walk.

Whatever the challenges, the company went public yesterday and joined VMware as a separately traded company within Dell. CEO Rob Mee says the company took the step of IPOing because it wanted additional capital.

“I think we can definitely use the capital to invest in marketing and R&D. The wider technology ecosystem is moving quickly. It does take additional investment to keep up,” Mee told TechCrunch just a few hours after his company rang the bell at the New York Stock Exchange.

As for that relationship of being a Dell company, he said that Michael Dell let him know early on after the EMC acquisition that he understood the company’s position. “From the time Dell acquired EMC, Michael was clear with me: You run the company. I’m just here to help. Dell is our largest shareholder, but we run independently. There have been opportunities to test that [since the acquisition] and it has held true,” Mee said.

Mee says that independence is essential because Pivotal has to remain technology-agnostic and it can’t favor Dell products and services over that mission. “It’s necessary because our core product is a cloud-agnostic platform. Our core value proposition is independence from any provider — and Dell and VMware are infrastructure providers,” he said.

That said, Mee can also play both sides because he can build products and services that do align with Dell and VMware offerings. “Certainly the companies inside the Dell family are customers of ours. Michael Dell has encouraged the IT group to adopt our methods and they are doing so,” he said. They have also started working more closely with VMware, announcing a container partnership last year.

Photo: Ron Miller

Overall though he sees his company’s mission in much broader terms, doing nothing less than helping the world’s largest companies transform their organizations. “Our mission is to transform how the world builds software. We are focused on the largest organizations in the world. What is a tailwind for us is that the reality is these large companies are at a tipping point of adopting how they digitize and develop software for strategic advantage,” Mee said.

The stock closed up 5 percent last night, but Mee says this isn’t about a single day. “We do very much focus on the long term. We have been executing to a quarterly cadence and have behaved like a public company inside Pivotal [even before the IPO]. We know how to do that while keeping an eye on the long term,” he said.

Apr 21, 2018
--

Through luck and grit, Datadog is fusing the culture of developers and operations

There used to be two cultures in the enterprise around technology. On one side were software engineers, who built out the applications needed by employees to conduct the business of their companies. On the other side were sysadmins, who were territorially protective of their hardware domain — the servers, switches, and storage boxes needed to power all of that software. Many a great comedy routine has been made at the interface of those two cultures, but they remained divergent.

That is, until the cloud changed everything. Suddenly, there was increasing overlap in the skills required for software engineering and operations, as well as a greater need for collaboration between the two sides to effectively deploy applications. Yet, while these two halves eventually became one whole, the software monitoring tools used by them were often entirely separate.

New York City-based Datadog was designed to bring these two cultures together to create a more nimble and collaborative software and operations culture. Founded in 2010 by Olivier Pomel and Alexis Lê-Quôc, the product offers monitoring and analytics for cloud-based workflows, allowing ops teams to track and analyze deployments and developers to instrument their applications. Pomel said that “the root of all of this collaboration is to make sure that everyone has the same understanding of the problem.”

The company has had dizzying success. Pomel declined to disclose precise numbers, but says the company had “north of $100 million” of recurring revenue in the past twelve months, and “we have been doubling that every year so far.” The company, headquartered in the New York Times Building in Times Square, employs more than 600 people across its various worldwide offices. The company has raised nearly $150 million of venture capital according to Crunchbase, and is perennially on bankers’ short lists for strong IPO prospects.

The real story though is just how much luck and happenstance can help put wind in the sails of a company.

Pomel first met Lê-Quôc while an undergraduate in France. He was working on running the campus network, and helped to discover that Lê-Quôc had hacked the network. Lê-Quôc was eventually disconnected, and Pomel would migrate to IBM’s upstate New York offices after graduation. After IBM, he led technology at Wireless Generation, a K-12 startup, where he ran into Lê-Quôc again, who was heading up ops for the company. The divide between the two cultures of dev and ops was glaring at the startup, where “we had developers who hated operations” and there was much “finger-pointing.”

Putting aside any lingering grievances from their undergrad days, the two began to explore how they could ameliorate the cultural differences they witnessed between their respective teams. “Bringing dev and ops together is not a feature, it is core,” Pomel explained. At the same time, they noticed that companies were increasingly talking about building on Amazon Web Services, which, in 2009, was still a relatively new concept. They incorporated Datadog in 2010 as a cloud-first monitoring solution, and launched general availability for the product in 2012.

Luck didn’t just bring the founders together twice, it also defined the currents of their market. Datadog was among the first cloud-native monitoring solutions, and the superlative success of cloud infrastructure in penetrating the enterprise the past few years has benefitted the company enormously. We had “exactly the right product at the right time,” Pomel said, and “a lot of it was luck.” He continued, “It’s healthy to recognize that not everything comes from your genius, because what works once doesn’t always work a second time.”

While startups have been a feature in New York for decades, enterprise infrastructure was in many ways in a dark age when the company launched, which made early fundraising difficult. “None of the West Coast investors were listening,” Pomel said, and “East Coast investors didn’t understand the infrastructure space well enough to take risks.” Even when he could get a West Coast VC to chat with him, they “thought it was a form of mental impairment to start an infrastructure startup in New York.”

Those fundraising difficulties ended up proving a boon for Datadog, because it forced the company to connect with customers much earlier and more often than it might have otherwise. Pomel said, “it forced us to spend all of our time with customers and people who were related to the problem” and ultimately, “it grounded us in the customer problem.” Pomel believes that the company’s early DNA of deeply listening to customers has allowed it to continue to outcompete its rivals on the West Coast.

More success is likely to come as companies continue to move their infrastructure onto the cloud. Datadog used to have a roughly even mix of private and public cloud business, and now the balance is moving increasingly toward the public side. Even large financial institutions, which have been reticent about transitioning their infrastructure, have now started to aggressively embrace cloud as the future of computing in the industry, according to Pomel.

Datadog intends to continue to add new modules to its core monitoring toolkit and expand its team. As the company has grown, so has the need to put in place more processes as parts of the company break. Quoting his co-founder, Pomel said the message to employees is “don’t mind the rattling sound — it is a spaceship, not an airliner” and “things are going to break and change, and it is normal.”

Much as Datadog has bridged the gap between developers and ops, Pomel hopes to continue to give back to the New York startup ecosystem by bridging the gap between technical startups and venture capital. He has made a series of angel investments into local emerging enterprise and data startups, including Generable, Seva, and Windmill. Hard work and a lot of luck is propelling Datadog into the top echelon of enterprise startups, pulling New York along with it.

Apr 21, 2018
--

Up-and-coming enterprise startups in NYC

New York City has an incredible density of up-and-coming enterprise-focused startups. While the winners are publicized and well-known, we felt it was time to put a bit of a spotlight on younger companies, ones you may not have heard about yet, but are likely to in the coming years.

TechCrunch asked two dozen founders, venture capitalists, and other community members which companies — other than ones they are directly connected to — they thought were most likely to change the enterprise world in the coming years. From a list of 64 nominated startups, we chose twelve we thought best exemplified the potential for New York. All data on venture capital raised comes from Crunchbase. Also be sure to check out our in-depth profiles of NS1, Datadog, BigID, Packet, and Timescale as well as Security Scorecard, Uplevel, and HYPR, which were not included on this list.

Apr 21, 2018
--

NS1 brings domain name services to the enterprise

When you think about critical infrastructure, DNS, or the domain name system, might not pop into your head, but what is more important than making sure your website loads quickly and reliably for your users? NS1 is a New York City startup trying to bring software smarts and automation to the DNS space.

“We’re a DNS and [Internet] traffic management technology company. We sit in a critical path. Companies point domains at our platforms,” company CEO and co-founder Kris Beevers told TechCrunch. That means when you type in a domain name like Google.com, you go to Google and you go there fast. It’s basic internet plumbing, but it’s essential.

Beevers cut his teeth as head of engineering at Voxel, a cloud infrastructure company that was acquired by Internap in 2012 for $35 million. He and his NS1 co-founders saw an opening in the DNS space and launched the company in 2013 with a set of software-defined DNS services. The startup was able to take advantage of the New York startup ecosystem early on to drive some business, even before they went looking for funding, but one incident really helped put the company on the map and effectively double its business.

That event occurred almost exactly two years ago, in 2016. One of NS1’s primary competitors, Dyn, a New Hampshire-based DNS company, was the victim of a massive DDoS attack that took down the service for hours. When critical infrastructure like your domain name server goes away, you see the consequences pretty starkly, and suddenly customers realized they didn’t just need this service, they needed redundancy in case the primary service went down — and with that attack, NS1’s business effectively doubled overnight.

Suddenly everyone who owned one needed another for redundancy. One competitor’s misfortune turned out to be highly beneficial for NS1, which turned out to be in the right place at the right time with the right solution. Dyn was actually acquired by Oracle later that year.

“DNS had been around since 1983. The first 20 years were very boring with no commercial ecosystem,” Beevers said. Even when it went commercial in the early 2000s, nobody was looking at this as a software problem. “We saw everyone in this space was a hardware or networking vendor. Nobody was a software company. Nobody had thought about automation or how automation fit into the stack. And nobody saw the big infrastructure trends,” Beevers explained.

They got their start in the adtech startup space that was booming in NYC when they launched in 2013. These companies were willing to take a chance with an unknown startup, partly because they were looking for any edge they could get, and partly because they knew Beevers from his days at Voxel, so he wasn’t a completely unknown quantity.

“Our ability around dynamic traffic management and performance reliability gave those ad companies [an advantage]. They were able to take a chance on us. If we have a bad day, a customer can’t operate. We had limited infrastructure. They placed a bet on us because of the [positive] impact we had on their business.”
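The dynamic traffic management Beevers describes boils down to making a routing decision on every DNS query. Here is a minimal sketch of that idea; the endpoint IPs, weights, and health flags are hypothetical and do not reflect NS1's actual platform or API:

```python
import random

# Hypothetical answer pool: each endpoint has a capacity weight and a
# health flag that would, in a real platform, be fed by active monitoring.
POOL = [
    {"ip": "203.0.113.10", "weight": 70, "healthy": True},
    {"ip": "203.0.113.20", "weight": 30, "healthy": True},
    {"ip": "203.0.113.30", "weight": 50, "healthy": False},  # failed health check
]

def resolve(pool):
    """Return one A-record answer, chosen by weight among healthy endpoints."""
    healthy = [e for e in pool if e["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy endpoints to answer with")
    weights = [e["weight"] for e in healthy]
    return random.choices(healthy, weights=weights, k=1)[0]["ip"]

print(resolve(POOL))  # one of the two healthy IPs, never the failed one
```

A real policy would also weigh geography, latency, and load, but the core move is the same: the answer to a DNS query becomes a computed decision rather than a static record.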

Today the company is growing fast, has raised close to $50 million and has close to 100 employees. While the bulk of those folks are in NYC, it has also opened offices in San Francisco; Londonderry, New Hampshire; the UK; and Singapore.

Beevers says the Dyn incident in many ways brought the industry closer together. While they compete, they still need to cooperate to keep the domain system up and running. “We compete and are comrades in the internet mess. We will all fall apart if we don’t work together,” he said. As it turned out, being part of the whole New York infrastructure community didn’t hurt either.

Apr 21, 2018
--

Full-Metal Packet is hosting the future of cloud infrastructure

Cloud computing has been a revolution for the data center. Rather than investing in expensive hardware and managing a data center directly, companies are relying on public cloud providers like AWS, Google Cloud, and Microsoft Azure to provide general-purpose and high-availability compute, storage, and networking resources in a highly flexible way.

Yet as workflows have moved to the cloud, companies are increasingly realizing that those abstracted resources can be enormously expensive compared to the hardware they used to own. Few companies want to go back to managing hardware directly themselves, but they also yearn to have the price-to-performance level they used to enjoy. Plus, they want to take advantage of a whole new ecosystem of customized and specialized hardware to process unique workflows — think Tensor Processing Units for machine learning applications.

That’s where Packet comes in. The New York City-based startup’s platform offers a highly customizable infrastructure for running bare metal in the cloud. Rather than sharing an instance with other users, Packet’s customers “own” the hardware they select, so they can use all the resources of that hardware.

Even more interesting is that Packet will also deploy custom hardware to its data centers, which currently number eighteen around the world. So, for instance, if you want to deploy a quantum computing box redundantly in half of those centers, Packet will handle the logistics of installing those boxes, setting them up, and managing that infrastructure for you.

The company was founded in 2014 by Zac Smith, Jacob Smith, and Aaron Welch, and it has raised a total of $12 million in venture capital financing according to Crunchbase, with its last round led by Softbank. “I took the usual path, I went to Juilliard,” Zac Smith, who is CEO, said to me at his office, which overlooks the World Trade Center in downtown Manhattan. Double bass was a first love, but he found his way eventually into internet hosting, working as COO of New York-based Voxel.

At Voxel, Smith said that he grew up in hosting just as the cloud started taking off. “We saw this change in the user from essentially a sysadmin who cared about Tom’s Hardware, to a developer who had never opened a computer but who was suddenly orchestrating infrastructure,” he said.

Innovation is the lifeblood of developers, yet public clouds were increasingly abstracting away any details of the underlying infrastructure from developers. Smith explained that “infrastructure was becoming increasingly proprietary, the land of few companies.” While he once thought about leaving the hosting world post-Voxel, he and his co-founders saw an opportunity to rethink cloud infrastructure from the metal up.

“Our customer is a millennial developer, 32 years old, and they have never opened an ATX case, and how could you possibly give them IT in the same way,” Smith asked. The idea of Packet was to bring back choice in infrastructure to these developers, while abstracting away the actual data center logistics that none of them wanted to work on. “You can choose your own opinion — we are hardware independent,” he said.

Giving developers more bare metal options is an interesting proposition, but it is Packet’s long-term vision that I think is most striking. In short, the company wants to completely change the model of hardware development worldwide.

VCs are increasingly investing in specialized chips and memory to handle unique processing loads, from machine learning to quantum computing applications. In some cases, these chips can process their workloads exponentially faster compared to general purpose chips, which at scale can save companies millions of dollars.

Packet’s mission is to encourage that ecosystem by essentially becoming a marketplace, connecting original equipment manufacturers with end-user developers. “We use the WeWork model a lot,” Smith said. What he means is that Packet allows you to rent space in its global network of data centers and handle all the logistics of installing and monitoring hardware boxes, much as WeWork allows companies to rent real estate while it handles the minutia like resetting the coffee filter.

In this vision, Packet would create more discerning and diverse buyers, allowing manufacturers to start targeting more specialized niches. Gone are the generic x86 processors from Intel driving nearly all cloud purchases, and in their place could be dozens of new hardware vendors who can build up their brands among developers and own segments of the compute and storage workload.

In this way, developers can hack their infrastructure much as an earlier generation may have tricked out their personal computer. They can now test new hardware more easily, and when they find a particular piece of hardware they like, they can get it running in the cloud in short order. Packet becomes not just the infrastructure operator — but the channel connecting buyers and sellers.

That’s Packet’s big vision. Realizing it will require that hardware manufacturers increasingly build differentiated chips. More importantly, companies will have to have unique workflows, be at a scale where optimizing those workflows is imperative, and realize that they can match those workflows to specific hardware to maximize their cost performance.

That may sound like a tall order, but Packet’s dream is to create exactly that kind of marketplace. If successful, it could transform how hardware and cloud vendors work together and ultimately, the innovation of any 32-year-old millennial developer who doesn’t like plugging a box in, but wants to plug in to innovation.

Apr 21, 2018
--

BigID lands in the right place at the right time with GDPR

Every startup needs a little skill and a little luck. BigID, a NYC-based data governance solution, has been blessed with both. The company, which helps customers identify sensitive data in big data stores, launched at just about the same time that the EU announced the GDPR data privacy regulations. Today, the company is having trouble keeping up with the business.

While you can’t discount that timing element, you have to have a product that actually solves a problem, and BigID appears to meet that criterion. “This [is] how the market is changing, by having and demanding more technology-based controls over how data is being used,” company CEO and co-founder Dimitri Sirota told TechCrunch.

Sirota’s company enables customers to identify the most sensitive data from among vast stores of data. In fact, he says some customers have hundreds of millions of users, but their unique advantage is having built the solution more recently. That provides a modern architecture that can scale to meet these big data requirements, while identifying the data that requires your attention in a way that legacy systems just aren’t prepared to do.
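At its simplest, that kind of discovery means scanning records for values that look like personal data. The following toy sketch illustrates the idea; it is not BigID's actual engine, and the patterns and record fields are hypothetical:

```python
import re

# Two illustrative detectors for personal data. A real system would have
# many more, plus correlation of each hit back to a specific individual.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan(record):
    """Return {field: [pii_types]} for every field whose value matches a pattern."""
    hits = {}
    for field, value in record.items():
        kinds = [kind for kind, rx in PATTERNS.items() if rx.search(str(value))]
        if kinds:
            hits[field] = kinds
    return hits

print(scan({"name": "Ada", "contact": "ada@example.com", "tax_id": "123-45-6789"}))
```

The hard part at BigID's scale is not the matching itself but running it across hundreds of millions of records in distributed stores and mapping each hit to an identity, which is where the claimed advantage of a more modern architecture comes in.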

“When we first started talking about this [in 2016] people didn’t grok it. They didn’t understand why you would need a privacy-centric approach. Even after 2016 when GDPR passed, most people didn’t see this. [Today] we are seeing a secular change. The assets they collect are valuable, but also incredibly toxic,” he said. Under the GDPR rules, it is the responsibility of the data owner to identify and protect the personal data under their purview, and that makes data a double-edged sword because you don’t want to be fined for failing to comply.

GDPR is a set of data privacy regulations set to take effect in the European Union at the end of May. Companies have to comply with these rules or could face stiff fines. The thing is, GDPR could be just the beginning. The company is seeing similar data privacy regulations in Canada, Australia, China and Japan. Something akin to this could also be coming to the United States after Facebook CEO Mark Zuckerberg appeared before Congress earlier this month. At the very least we could see state-level privacy laws in the US, Sirota said.

Sirota says there are challenges getting funded as a NYC startup because there hadn’t been a strong enterprise ecosystem in place until recently, but that’s changing. “Starting an enterprise company in New York is challenging. Ed Sim from Boldstart [a New York City early stage VC firm that invests in enterprise startups] has helped educate through investment and partnerships. More challenging, but it’s reaching a new level now,” he said.

The company launched in 2016 and has raised $16.1 million to date. It scored the bulk of that in a $14 million round at the end of January. Just this week at the RSAC Sandbox competition at the RSA Conference in San Francisco, BigID was named the Most Innovative Startup, a big recognition of the work it is doing around GDPR.

Apr 20, 2018
--

Kubernetes and Cloud Foundry grow closer

Containers are eating the software world — and Kubernetes is the king of containers. So if you are working on any major software project, especially in the enterprise, you will run into it sooner or later. Cloud Foundry, which hosted its semi-annual developer conference in Boston this week, is an interesting example of this.

Outside of the world of enterprise developers, Cloud Foundry remains a bit of an unknown entity, despite having users in at least half of the Fortune 500 companies (though in the startup world, it has almost no traction). If you are unfamiliar with Cloud Foundry, you can think of it as somewhat similar to Heroku, but as an open-source project with a large commercial ecosystem and the ability to run it at scale on any cloud or on-premises installation. Developers write their code (following the twelve-factor methodology), define what it needs to run and Cloud Foundry handles all of the underlying infrastructure and — if necessary — scaling. Ideally, that frees up the developer from having to think about where their applications will run and lets them work more efficiently.
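The "define what it needs to run" step usually takes the form of an application manifest pushed alongside the code. A minimal sketch might look like the following; the app name, route, buildpack, and sizing values are illustrative, not from any real deployment:

```yaml
# manifest.yml — read by `cf push`; Cloud Foundry stages the app with the
# named buildpack, binds the route, and keeps two instances running.
applications:
- name: demo-api
  memory: 256M
  instances: 2
  buildpack: nodejs_buildpack
  routes:
  - route: demo-api.example.com
  env:
    NODE_ENV: production
```

Running `cf push` in the application directory picks this file up; everything below that line of abstraction, including containers, scheduling, and routing, is the platform's problem rather than the developer's.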

To enable all of this, the Cloud Foundry Foundation made a very early bet on containers, even before Docker was a thing. Since Kubernetes wasn’t around at the time, the various companies involved in Cloud Foundry came together to build their own container orchestration system, which still underpins much of the service today. As Kubernetes took off, though, the pressure to bring support for it grew inside of the Cloud Foundry ecosystem. Last year, the Foundation announced its first major move in this direction by launching its Kubernetes-based Container Runtime for managing containers, which sits next to the existing Application Runtime. With this, developers can use Cloud Foundry to run and manage their new (and existing) monolithic apps and run them in parallel with the new services they develop.

But remember how Cloud Foundry also still uses its own container service for the Application Runtime? There is really no reason to do that now that Kubernetes (and the various other projects in its ecosystem) have become the default of handling containers. It’s maybe no surprise then that there is now a Cloud Foundry project that aims to rip out the old container management systems and replace them with Kubernetes. The container management piece isn’t what differentiates Cloud Foundry, after all. Instead, it’s the developer experience — and at the end of the day, the whole point of Cloud Foundry is that developers shouldn’t have to care about the internal plumbing of the infrastructure.

There is another aspect to how the Cloud Foundry ecosystem is embracing Kubernetes, too. Since Cloud Foundry is also just software, there’s nothing stopping you from running it on top of Kubernetes, too. And with that, it’s no surprise that some of the largest Cloud Foundry vendors, including SUSE and IBM, are doing exactly that.

The SUSE Cloud Application Platform, which is a certified Cloud Foundry distribution, can run on any public cloud Kubernetes infrastructure, including the Microsoft Azure Container service. As the SUSE team told me, that means it’s not just easier to deploy, but also far less resource-intensive to run.

Similarly, IBM is now offering Cloud Foundry on top of Kubernetes for its customers, though it’s only calling this an experimental product for now. IBM’s GM of Cloud Developer Services Don Boulia stressed that IBM’s customers were mostly looking for ways to run their workloads in an isolated environment that isn’t shared with other IBM customers.

Boulia also stressed that for most customers, it’s not about Kubernetes versus Cloud Foundry. For most of his customers, using Kubernetes by itself is very much about moving their existing applications to the cloud. And for new applications, those customers are then opting to run Cloud Foundry.

That’s something the SUSE team also stressed. One pattern SUSE has seen is that potential customers come to it with the idea of setting up a container environment and then, over the course of the conversation, decide to implement Cloud Foundry as well.

Indeed, the message of this week’s event was very much that Kubernetes and Cloud Foundry are complementary technologies. That’s something Chen Goldberg, Google’s Director of Engineering for Container Engine and Kubernetes, also stressed during a panel discussion at the event.

Both the Cloud Foundry Foundation and the Cloud Native Computing Foundation (CNCF), the home of Kubernetes, are under the umbrella of the Linux Foundation. They take somewhat different approaches to their communities, with Cloud Foundry stressing enterprise users far more than the CNCF. There are probably some politics at play here, but for the most part, the two organizations seem friendly enough — and they do share a number of members. “We are part of CNCF and part of Cloud Foundry foundation,” Pivotal CEO Rob Mee told our own Ron Miller. “Those communities are increasingly sharing tech back and forth and evolving together. Not entirely independent and not competitive either. Lot of complexity and subtlety. CNCF and Cloud Foundry are part of a larger ecosystem with complementary and converging tech.”

We’ll likely see more of this technology sharing — and maybe collaboration — between the CNCF and Cloud Foundry going forward. The CNCF is, after all, the home of a number of very interesting projects for building cloud-native applications that do have their fair share of use cases in Cloud Foundry, too.

Apr 18, 2018
--

Cloud Foundry Foundation looks east as Alibaba joins as a gold member

Cloud Foundry is among the most successful open source projects in the enterprise right now. It’s a cloud-agnostic platform-as-a-service offering that helps businesses develop and run their software more efficiently. In many enterprises, it’s now the standard platform for writing new applications. Indeed, half of the Fortune 500 companies now use it in one form or another.

With the imminent IPO of Pivotal, which helped birth the project and still sits at the core of its ecosystem, Cloud Foundry is about to get its first major moment in the spotlight outside of its core audience. Over the course of the last few years, though, the project and the foundation that manages it have also received the sponsorship of companies like Cisco, IBM, SAP, SUSE, Google, Microsoft, Ford, Volkswagen and Huawei.

Today, China’s Alibaba Group is joining the Cloud Foundry Foundation as a gold member. Compared to AWS, Azure and Google Cloud, the Alibaba Cloud gets relatively little press, but it’s among the largest clouds in the world. Starting today, Cloud Foundry is also available on the Alibaba Cloud, with support for both the Cloud Foundry application and container runtimes.

Cloud Foundry CTO Chip Childers told me that he expects Alibaba to become an active participant in the open source community. He also noted that Cloud Foundry is seeing quite a bit of growth in China — a sentiment that I’ve seen echoed by other large open source projects, including the likes of OpenStack.

Open source is being heavily adopted in China and many companies are now trying to figure out how to best contribute to these kinds of projects. Joining a foundation is an obvious first step. Childers also noted that many traditional enterprises in China are now starting down the path of digital transformation, which is driving the adoption of both open source tools and cloud in general.

Apr 18, 2018
--

Cloud.gov makes Cloud Foundry easier to adopt for government agencies

At the Cloud Foundry Summit in Boston, the team behind the U.S. government’s cloud.gov application platform announced that it is now a certified Cloud Foundry platform that is guaranteed to be compatible with other certified providers, like Huawei, IBM, Pivotal, SAP and — also starting today — SUSE. With this, cloud.gov becomes the first government agency to become Cloud Foundry-certified.

The point behind the certification is to ensure that all of the various platforms that support Cloud Foundry are compatible with each other. In the government context, this means that agencies can easily move their workloads between clouds (assuming they have all the necessary government certifications in place). But what’s maybe even more important is that it also ensures skills portability, which should make hiring and finding contractors easier for these agencies. Given that the open source Cloud Foundry project has seen quite a bit of adoption in the private sector, with half of the Fortune 500 companies using it, that’s often an important factor for deciding which platform to build on.

From the outset, cloud.gov, which was launched by the General Services Administration’s 18F office to improve the U.S. government’s public-facing websites and applications, was built on top of Cloud Foundry. Similar agencies in Australia and the U.K. have made the same decision to standardize on the Cloud Foundry platform. Cloud Foundry launched its certification program a few years ago; last year it added another program for certifying the skills of individual developers.

To be able to run government workloads, a cloud platform has to offer a certain set of security requirements. As Cloud Foundry Foundation CTO Chip Childers told me, the work 18F did to get the FedRAMP authorization for cloud.gov helped bring better controls to the upstream project, too, and he stressed that all of the governments that have adopted the platform have contributed to the overall project.

Apr 17, 2018
--

Google Cloud releases Dialogflow Enterprise Edition for building chat apps

Building conversational interfaces is a hot new area for developers. Chatbots can be a way to reduce friction in websites and apps and to give customers quick answers to commonly asked questions in a conversational framework. Today, Google announced it was making Dialogflow Enterprise Edition generally available. It had previously been in beta.

This technology came to Google via the API.AI acquisition in 2016. Google wisely decided to change the name of the tool along the way, giving it a moniker that more closely matched what it actually does. The company reports that hundreds of thousands of developers are using the tool already to build conversational interfaces.

This isn’t a Google-only tool, though. It works across voice interface platforms, including Google Assistant, Amazon Alexa and Facebook Messenger, giving developers a way to build their chat apps once and use them across several devices without having to change the underlying code in a significant way.

What’s more, with today’s release the company is providing increased functionality and making it easier to transition to the enterprise edition at the same time.

“Starting today, you can combine batch operations that would have required multiple API calls into a single API call, reducing lines of code and shortening development time. Dialogflow API V2 is also now the default for all new agents, integrating with Google Cloud Speech-to-Text, enabling agent management via API, supporting gRPC, and providing an easy transition to Enterprise Edition with no code migration,” Dan Aharon, Google’s product manager for Cloud AI, wrote in a company blog post announcing the tool.

The company showed off a few new customers using Dialogflow to build chat interfaces for their customers, including KLM Royal Dutch Airlines, Domino’s and Ticketmaster.

The new tool, which is available today, supports more than 30 languages and as a generally available enterprise product comes with a support package and service level agreement (SLA).
