Oct 28, 2020

Enso Security raises $6M for its application security posture management platform

Enso Security, a Tel Aviv-based startup that is building a new application security posture management platform, today announced that it has raised a $6 million seed funding round led by YL Ventures, with participation from Jump Capital. Angel investors in this round include HackerOne co-founder and CTO Alex Rice; Sounil Yu, the former chief security scientist at Bank of America; Omkhar Arasaratnam, the former head of Data Protection Technology at JPMorgan Chase; and toDay Ventures.

The company was founded by Roy Erlich (CEO), Chen Gour Arie (CPO) and Barak Tawily (CTO). As is so often the case with Israeli security startups, the founding team includes former members of the Israeli Intelligence Corps, but also a lot of hands-on commercial experience. Erlich, for example, was previously the head of application security at Wix, while Gour Arie worked as an application security consultant for numerous companies across Europe and Tawily has a background in pentesting and led a security team at Wix, too.

Image Credits: Enso Security / Getty Images

“It’s no secret that, today, the diversity of R&D allows [companies] to rapidly introduce new applications and push changes to existing ones,” Erlich explained. “But this great complexity for application security teams results in significant AppSec management challenges. These challenges include the difficulty of tracking applications across environments, measuring risks, prioritizing tasks and enforcing uniform Application Security strategies across all applications.”

But as companies push out code faster than ever, application security teams aren’t able to keep up — and may not even know about every application being developed internally. The team argues that application security today is often a manual effort to identify owners and measure risk, for example — and the resources for application security teams are often limited, especially when compared to the size of the overall development team at most companies. Indeed, the Enso team argues that most AppSec teams today spend the bulk of their time building relationships with developers and performing operational and product-related tasks, not working on application security itself.

Image Credits: Enso Security / Getty Images

“It’s a losing fight from the application security side because you have no chance to cover everything,” Erlich noted. “Having said that, […] it’s all about managing the risk. You need to make sure that you take data-driven decisions and that you have all the data that you need in one place.”

Enso Security then wants to give these teams a platform that gives them a single pane of glass to discover applications, identify owners, detect changes and capture their security posture. From there, teams can then prioritize and track their tasks and get real-time feedback on what is happening across their tools. The company’s tools currently pull in data from a wide variety of tools, including the likes of JIRA, Jenkins, GitLab, GitHub, Splunk, ServiceNow and the Envoy edge and service proxy. But as the team argues, even getting data from just a few sources already provides benefits for Enso’s users.
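
Enso hasn’t published its integration code, but the aggregation pattern it describes is straightforward to sketch. The snippet below is a rough illustration only: it pulls a repository inventory from GitHub and open security tickets from Jira into a single view. The org name, project key, label and custom field are hypothetical placeholders, and a real posture management platform would ingest far more sources.

```python
import os
import requests

GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]
JIRA_URL = "https://acme.atlassian.net"  # placeholder Jira instance
JIRA_AUTH = (os.environ["JIRA_USER"], os.environ["JIRA_TOKEN"])

def github_repos(org: str) -> list[dict]:
    """List an organization's repositories (one page, for brevity)."""
    resp = requests.get(
        f"https://api.github.com/orgs/{org}/repos",
        headers={"Authorization": f"token {GITHUB_TOKEN}"},
        params={"per_page": 100},
    )
    resp.raise_for_status()
    return resp.json()

def open_security_issues(project: str) -> list[dict]:
    """Search Jira for open issues labeled 'security' in one project."""
    resp = requests.get(
        f"{JIRA_URL}/rest/api/2/search",
        auth=JIRA_AUTH,
        params={"jql": f"project = {project} AND labels = security "
                       f"AND statusCategory != Done"},
    )
    resp.raise_for_status()
    return resp.json()["issues"]

# Build a single view: every app, when it last changed, and its open findings.
inventory = {repo["name"]: {"pushed_at": repo["pushed_at"], "issues": []}
             for repo in github_repos("acme")}          # placeholder org
for issue in open_security_issues("SEC"):               # placeholder project
    app = issue["fields"].get("customfield_10042")      # hypothetical "application" field
    if app in inventory:
        inventory[app]["issues"].append(issue["key"])
```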

Looking ahead, the team plans to continue improving its product and staff up from its small group of seven employees to about 20 in the next year.

“Roy, Chen and Barak have come up with a very elegant solution to a notoriously complex problem space,” said Ofer Schreiber, partner at YL Ventures. “Because they cut straight to visibility — the true heart of this issue — cybersecurity professionals can finally see and manage all of the applications in their environments. This will have an extraordinary impact on the rate of application rollout and enterprise productivity.”

Oct 21, 2020

Contrast launches its security observability platform

Contrast, a developer-centric application security company with customers that include Liberty Mutual Insurance, NTT Data, AXA and Bandwidth, today announced the launch of its security observability platform. The idea here is to offer developers a single pane of glass to manage an application’s security across its lifecycle, combined with real-time analysis and reporting, as well as remediation tools.

“Every line of code that’s happening increases the risk to a business if it’s not secure,” said Contrast CEO and chairman Alan Naumann. “We’re focused on securing all that code that businesses are writing for both automation and digital transformation.”

Over the course of the last few years, the well-funded company, which raised a $65 million Series D round last year, launched numerous security tools that cover a wide range of use cases, from automated penetration testing to cloud application security and now DevOps — and this new platform is meant to tie them all together.

DevOps, the company argues, is really what necessitates a platform like this: developers now push more code into production than ever, and the onus of ensuring that this code is secure increasingly falls on them as well.

Image Credits: Contrast

Traditionally, Naumann argues, security services focused on the code itself and on looking at traffic.

“We think at the application layer, the same principles of observability apply that have been used in the IT infrastructure space,” he said. “Specifically, we do instrumentation of the code and we weave security sensors into the code as it’s being developed and are looking for vulnerabilities and observing running code. […] Our view is: the world’s most complex systems are best when instrumented, whether it’s an airplane, a spacecraft, an IT infrastructure. We think the same is true for code. So our breakthrough is applying instrumentation to code and observing for security vulnerabilities.”
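
Contrast’s actual agents work at the bytecode level and are proprietary; as a toy illustration of the idea of weaving a security sensor into running code, the sketch below wraps a hypothetical query function with a decorator that flags string-built SQL at call time. Everything here is invented for illustration.

```python
import functools

def sql_sensor(func):
    """Toy 'security sensor': observe calls to a query function at runtime
    and flag queries that interpolate values instead of using parameters."""
    @functools.wraps(func)
    def wrapper(query: str, params=None, **kwargs):
        if params is None and any(ch in query for ch in ("'", '"')):
            # A real agent traces taint from request input to the sink;
            # this quote-character heuristic only hints at the idea.
            print(f"[sensor] possible SQL injection sink: {query!r}")
        return func(query, params, **kwargs)
    return wrapper

@sql_sensor
def run_query(query: str, params=None):
    ...  # hand off to the real database driver

run_query("SELECT * FROM users WHERE name = '" + "alice" + "'")      # flagged
run_query("SELECT * FROM users WHERE name = %s", params=("alice",))  # clean
```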

With this new platform, Contrast is aggregating information from its existing systems into a single dashboard. And while Contrast observes the code throughout its lifecycle, it also scans for vulnerabilities whenever developers check code into the CI/CD pipeline, thanks to integrations with most of the standard tools like Jenkins. It’s worth noting that the service also scans for vulnerabilities in open-source libraries. Once deployed, Contrast’s new platform keeps an eye on the data that runs through the various APIs and systems the application connects to and scans for potential security issues there as well.
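
Contrast’s library analysis is likewise proprietary; to make the open-source dependency check concrete, here is a minimal stand-in (not Contrast’s API) that queries Google’s public OSV vulnerability database for a single pinned package, the kind of gate a CI pipeline might run on check-in.

```python
import requests

def check_dependency(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    """Query the public OSV database for known vulnerabilities in one package."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": name, "ecosystem": ecosystem}, "version": version},
    )
    resp.raise_for_status()
    return [v["id"] for v in resp.json().get("vulns", [])]

# e.g. an old Django pin that a CI gate might reject
print(check_dependency("django", "2.2.0"))
```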

The platform currently supports all of the large cloud providers, including AWS, Azure and Google Cloud, and popular languages and frameworks, including Java, Python, .NET and Ruby.

Image Credits: Contrast

Oct 20, 2020

Splunk acquires Plumbr and Rigor to build out its observability platform

Data platform Splunk today announced that it has acquired two startups, Plumbr and Rigor, to build out its new Observability Suite, which is also launching today. Plumbr is an application performance monitoring service, while Rigor focuses on digital experience monitoring, using synthetic monitoring and optimization tools to help businesses optimize their end-user experiences. Both of these acquisitions complement the technology and expertise Splunk acquired when it bought SignalFx for over $1 billion last year.

Splunk did not disclose the price of these acquisitions, but Estonia-based Plumbr had raised about $1.8 million, while Atlanta-based Rigor raised a debt round earlier this year.

When Splunk acquired SignalFx, it said it did so in order to become a leader in observability and APM. As Splunk CTO Tim Tully told me, the idea here now is to accelerate this process.

Image Credits: Splunk

“Because a lot of our users and our customers are moving to the cloud really, really quickly, the way that they monitor [their] applications changed because they’ve gone to serverless and microservices a ton,” he said. “So we entered that space with those acquisitions, we quickly folded them together with these next two acquisitions. What Plumbr and Rigor do is really fill out more of the portfolio.”

He noted that Splunk was especially interested in Plumbr’s bytecode instrumentation and its real-user monitoring capabilities, and in Rigor’s synthetics capabilities around digital experience monitoring (DEM). “By filling in those two pieces of the portfolio, it gives us a really amazing set of solutions because DEM was the missing piece for our APM strategy,” Tully explained.
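
Rigor’s synthetics run as a managed service, but at its core a synthetic check is just a scripted probe executed on a schedule from known locations. A minimal sketch of the idea, with the URL and latency budget as placeholder values:

```python
import time
import requests

def synthetic_check(url: str, budget_ms: float = 500.0) -> dict:
    """Probe a URL like a synthetic monitor: measure wall-clock latency,
    verify the status code, and report pass/fail against a latency budget."""
    start = time.perf_counter()
    resp = requests.get(url, timeout=10)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return {
        "url": url,
        "status": resp.status_code,
        "latency_ms": round(elapsed_ms, 1),
        "ok": resp.status_code == 200 and elapsed_ms <= budget_ms,
    }

print(synthetic_check("https://example.com/"))  # placeholder endpoint
```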

Image Credits: Splunk

With the launch of its Observability Suite, Splunk is now pulling together a lot of these capabilities into a single product — which also features a new design that makes it stand apart from the rest of Splunk’s tools. It combines logs, metrics, traces, digital experience, user monitoring, synthetics and more.

“At Yelp, our engineers are responsible for hundreds of different microservices, all aimed at helping people find and connect with great local businesses,” said Chris Gordon, Technical Lead at Yelp, where his team has been testing the new suite. “Our Production Observability team collaborates with Engineering to improve visibility into the performance of key services and infrastructure. Splunk gives us the tools to empower engineers to monitor their own services as they rapidly ship code, while also providing the observability team centralized control and visibility over usage to ensure we’re using our monitoring resources as efficiently as possible.”

Oct 19, 2020

Juniper Networks acquires Boston-area AI SD-WAN startup 128 Technology for $450M

Today Juniper Networks announced it was acquiring smart wide area networking startup 128 Technology for $450 million.

This marks the second AI-fueled networking company Juniper has acquired in the last year and a half, after purchasing Mist Systems in March 2019 for $405 million. With 128 Technology, the company gets more AI-driven SD-WAN technology. SD-WAN is short for software-defined wide area network: a network that covers a wide geographical area, such as a web of satellite offices, rather than a single defined space.

Today, instead of simply having software-defined networking, the newer systems use artificial intelligence to help automate session and policy details as needed, rather than relying on static policies, which might not fit every situation perfectly.
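
128 Technology’s session-smart routing is proprietary, but the difference from a static policy can be sketched: instead of a fixed “always prefer MPLS” rule, score each WAN link per session from live telemetry. The link names, metrics and weights below are invented purely for illustration.

```python
# Illustrative only: pick a WAN path per session from live telemetry,
# rather than applying one static preference to every session.
telemetry = {
    "mpls":  {"latency_ms": 40, "loss_pct": 0.1, "utilization": 0.85},
    "lte":   {"latency_ms": 70, "loss_pct": 0.5, "utilization": 0.30},
    "cable": {"latency_ms": 25, "loss_pct": 2.0, "utilization": 0.50},
}

def score(link: dict, loss_sensitive: bool) -> float:
    """Lower is better; a voice call weighs packet loss far more heavily."""
    loss_weight = 50.0 if loss_sensitive else 5.0
    return link["latency_ms"] + loss_weight * link["loss_pct"] + 20.0 * link["utilization"]

def pick_path(session_kind: str) -> str:
    loss_sensitive = session_kind in ("voice", "video")
    return min(telemetry, key=lambda name: score(telemetry[name], loss_sensitive))

print(pick_path("voice"))        # "mpls": avoids the lossy cable link
print(pick_path("bulk-backup"))  # "cable": accepts loss for the lowest latency
```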

Writing in a company blog post announcing the deal, executive vice president and chief product officer Manoj Leelanivas said 128 Technology adds great flexibility to the portfolio as Juniper tries to transition from legacy networking approaches to modern ones driven by AI, especially in conjunction with the Mist purchase.

“Combining 128 Technology’s groundbreaking software with Juniper SD-WAN, WAN Assurance and Marvis Virtual Network Assistant (driven by Mist AI) gives customers the clearest and quickest path to full AI-driven WAN operations — from initial configuration to ongoing AIOps, including customizable service levels (down to the individual user), simple policy enforcement, proactive anomaly detection, fault isolation with recommended corrective actions, self-driving network operations and AI-driven support,” Leelanivas wrote in the blog post.

128 Technology was founded in 2014 and raised over $96 million, according to Crunchbase data. Its most recent round was a $30 million Series D investment in September 2019 led by G20 Ventures and The Perkins Fund.

In addition to the $450 million, Juniper has asked 128 Technology to issue retention stock bonuses to encourage the startup’s employees to stay on during the transition to the new owners. Juniper has promised to honor this stock under the terms of the deal. The deal is expected to close in Juniper’s fiscal fourth quarter, subject to normal regulatory review.

Oct 19, 2020

The OpenStack Foundation becomes the Open Infrastructure Foundation

This has been a long time coming, but the OpenStack Foundation today announced that it is changing its name to “Open Infrastructure Foundation,” starting in 2021.

The announcement, which the foundation made at its virtual developer conference, doesn’t exactly come as a surprise. Over the course of the last few years, the organization started adding new projects that went well beyond the core OpenStack project, and renamed its conference to the “Open Infrastructure Summit.” The organization actually filed for the “Open Infrastructure Foundation” trademark back in April.

Image Credits: OpenStack Foundation

After years of hype, the open-source OpenStack project hit a bit of a wall in 2016, as the market started to consolidate. The project itself, which helps enterprises run their private cloud, found its niche in the telecom space, though, and continues to thrive as one of the world’s most active open-source projects. Indeed, I regularly hear from OpenStack vendors that they are now seeing record sales numbers — despite the lack of hype. With the project being stable, though, the Foundation started casting a wider net and added additional projects like the popular Kata Containers runtime and CI/CD platform Zuul.

“We are officially transitioning and becoming the Open Infrastructure Foundation,” longtime OpenStack Foundation executive director Jonathan Bryce told me. “That is something that I think is an awesome step that’s built on the success that our community has spawned both within projects like OpenStack, but also as a movement […], which is [about] how do you give people choice and control as they build out digital infrastructure? And that is, I think, an awesome mission to have. And that’s what we are recognizing and acknowledging and setting up for another decade of doing that together with our great community.”

In many ways, it’s been more of a surprise that the organization waited as long as it did. As the foundation’s COO Mark Collier told me, the team waited because it wanted to be sure that it did this right.

“We really just wanted to make sure that all the stuff we learned when we were building the OpenStack community and with the community — that started with a simple idea of ‘open source should be part of cloud, for infrastructure.’ That idea has just spawned so much more open source than we could have imagined. Of course, OpenStack itself has gotten bigger and more diverse than we could have imagined,” Collier said.

As part of today’s announcement, the group also announced that its board approved four new members at its Platinum tier, its highest membership level: Ant Group, the Alibaba affiliate behind Alipay, embedded systems specialist Wind River, China’s FiberHome (which was previously a Gold member) and Facebook Connectivity. These companies will join the new foundation in January. To become a Platinum member, companies must contribute $350,000 per year to the foundation and have at least two full-time employees contributing to its projects.

“If you look at those companies that we have as Platinum members, it’s a pretty broad set of organizations,” Bryce noted. “AT&T, the largest carrier in the world. And then you also have a company like Ant, who’s the largest payment processor in the world and a massive financial services company overall — over to Ericsson, that does telco, Wind River, that does defense and manufacturing. And I think that speaks to that everybody needs infrastructure. If we build a community — and we successfully structure these communities to write software with a goal of getting all of that software out into production, I think that creates so much value for so many people: for an ecosystem of vendors and for a great group of users and a lot of developers love working in open source because we work with smart people from all over the world.”

The OpenStack Foundation’s existing members are also on board and Bryce and Collier hinted at several new members who will join soon but didn’t quite get everything in place for today’s announcement.

We can probably expect the new foundation to start adding new projects next year, but it’s worth noting that the OpenStack project continues apace. The latest of the project’s bi-annual releases, dubbed “Victoria,” launched last week, with additional Kubernetes integrations, improved support for various accelerators and more. Nothing will really change for the project now that the foundation is changing its name — though it may end up benefitting from a reenergized and more diverse community that will build out projects at its periphery.

Oct 9, 2020

How Roblox completely transformed its tech stack

Picture yourself in the role of CIO at Roblox in 2017.

At that point, the gaming platform and publishing system that launched in 2005 was growing fast, but its underlying technology was aging, consisting of a single data center in Chicago and a bunch of third-party partners, including AWS, all running bare metal (nonvirtualized) servers. At a time when users have precious little patience for outages, your uptime was just two nines, or 99% (five nines, or 99.999%, is considered optimal).
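
The gap between those two figures is easier to feel as allowed downtime per year, which a few lines of arithmetic make plain:

```python
# Allowed downtime per year at a given availability level.
for nines, availability in [("two nines", 0.99), ("five nines", 0.99999)]:
    downtime_hours = (1 - availability) * 365 * 24
    print(f"{nines}: {downtime_hours:8.2f} hours/year")

# two nines:     87.60 hours/year  (over 3.5 days)
# five nines:     0.09 hours/year  (about 5 minutes)
```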

Unbelievably, Roblox was popular in spite of this, but the company’s leadership knew it couldn’t continue with performance like that, especially as it was rapidly gaining in popularity. The company needed to call in the technology cavalry, which is essentially what it did when it hired Dan Williams in 2017.

Williams has a history of solving these kinds of intractable infrastructure issues, with a background that includes a gig at Facebook between 2007 and 2011, where he worked on the technology to help the young social network scale to millions of users. Later, he worked at Dropbox, where he helped build a new internal network, leading the company’s move away from AWS, a major undertaking involving moving more than 500 petabytes of data.

When Roblox approached him in mid-2017, he jumped at the chance to take on another major infrastructure challenge. While they are still in the midst of the transition to a new modern tech stack today, we sat down with Williams to learn how he put the company on the road to a cloud-native, microservices-focused system with its own network of worldwide edge data centers.

Scoping the problem

Oct 7, 2020

Kong launches Kong Konnect, its cloud-native connectivity platform

At its (virtual) Kong Summit 2020, API platform Kong today announced the launch of Kong Konnect, its managed end-to-end cloud-native connectivity platform. The idea here is to give businesses a single service that allows them to manage the connectivity between their APIs and microservices and help developers and operators manage their workflows across Kong’s API Gateway, Kubernetes Ingress and Kong Service Mesh runtimes.

“It’s a universal control plane delivery cloud that’s consumption-based, where you can manage and orchestrate API gateway runtime, service mesh runtime, and Kubernetes Ingress controller runtime — and even Insomnia for design — all from one platform,” Kong CEO and co-founder Augusto “Aghi” Marietti told me.

The new service is now in private beta and will become generally available in early 2021.

Image Credits: Kong

At the core of the platform is Kong’s new ServiceHub, which provides that single pane of glass for managing a company’s services across the organization (and makes them accessible across teams, too).

As Marietti noted, organizations can choose which runtime they want to use and purchase only those capabilities of the service that they currently need. The platform also includes built-in monitoring tools and supports any cloud, Kubernetes provider or on-premises environment, as long as it is Kubernetes-based.
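
Konnect itself is still in private beta, but the runtime it orchestrates is the open-source Kong gateway, whose Admin API gives a flavor of the model: register a service, then attach a route that exposes it. A minimal sketch against a local gateway, with placeholder names throughout:

```python
import requests

ADMIN = "http://localhost:8001"  # open-source Kong gateway Admin API

# Register an upstream service with the gateway...
requests.post(f"{ADMIN}/services", json={
    "name": "orders",                      # placeholder service name
    "url": "http://orders.internal:8080",  # placeholder upstream address
}).raise_for_status()

# ...then expose it by attaching a route.
requests.post(f"{ADMIN}/services/orders/routes", json={
    "name": "orders-route",
    "paths": ["/orders"],
}).raise_for_status()

# Traffic to the gateway's proxy port now reaches the upstream:
#   curl http://localhost:8000/orders
```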

The idea here, too, is to make all these tools accessible to developers and not just architects and operators. “I think that’s a key advantage, too,” Marietti said. “We are lowering the barrier by making a connectivity technology easier to be used by the 50 million developers — not just by the architects that were doing big grand plans at a large company.”

To do this, Konnect will be available as a self-service platform, reducing the friction of adopting the service.

Image Credits: Kong

This is also part of the company’s grander plan to go beyond its core API management services. Those services aren’t going away, but they are now part of the larger Kong platform. With its open-source Kong API Gateway, the company built the pathway to this point; now that the gateway is a stable product, Kong is clearly expanding beyond it with a cloud connectivity play that takes the company’s existing runtimes and combines them into a more comprehensive service.

“We have upgraded the vision of really becoming an end-to-end cloud connectivity company,” Marietti said. “Whether that’s API management or Kubernetes Ingress, […] or Kuma Service Mesh. It’s about connectivity problems. And so the company uplifted that solution to the enterprise.”


Oct 5, 2020

As it closes in on Arm, Nvidia announces UK supercomputer dedicated to medical research

As Nvidia continues to work through its deal to acquire Arm from SoftBank for $40 billion, the computing giant is making another big move to lay out its commitment to investing in U.K. technology. Today the company announced plans to develop Cambridge-1, a new £40 million AI supercomputer that will be used for research in the health industry in the country, the first supercomputer built by Nvidia specifically for external research access, it said.

Nvidia said it is already working with GSK, AstraZeneca, London hospitals Guy’s and St Thomas’ NHS Foundation Trust, King’s College London and Oxford Nanopore to use the Cambridge-1. The supercomputer is due to come online by the end of the year and will be the company’s second supercomputer in the country. The first is already in development at the company’s AI Center of Excellence in Cambridge, and the plan is to add more supercomputers over time.

The growing role of AI has underscored an interesting crossroads in medical research. On one hand, leading researchers all acknowledge the role it will be playing in their work. On the other, none of them (nor their institutions) have the resources to meet that demand on their own. That’s driving them all to get involved much more deeply with big tech companies like Google, Microsoft and, in this case, Nvidia, to carry out work.

Alongside the supercomputer news, Nvidia is making a second announcement in the area of healthcare in the U.K.: it has inked a partnership with GSK, which has established an AI hub in London, to build AI-based computational processes that will be used in drug and vaccine discovery — an especially timely piece of news, given that we are in a global health pandemic and all drug makers and researchers are on the hunt to understand more about, and build vaccines for, COVID-19.

The news is coinciding with Nvidia’s industry event, the GPU Technology Conference.

“Tackling the world’s most pressing challenges in healthcare requires massively powerful computing resources to harness the capabilities of AI,” said Jensen Huang, founder and CEO of Nvidia, in his keynote at the event. “The Cambridge-1 supercomputer will serve as a hub of innovation for the U.K., and further the groundbreaking work being done by the nation’s researchers in critical healthcare and drug discovery.”

The company plans to dedicate Cambridge-1 resources to four areas, it said: industry research, in particular joint research on projects that exceed the resources of any single institution; compute time granted to universities; health-focused AI startups; and education for future AI practitioners. It’s already building specific applications, like the drug discovery work it’s doing with GSK, that will run on the machine.

The Cambridge-1 will be built on Nvidia’s DGX SuperPOD system, which delivers 400 petaflops of AI performance and 8 petaflops of Linpack performance. Nvidia said this will rank it as the 29th-fastest supercomputer in the world.

“Number 29” doesn’t sound very groundbreaking, but there are other reasons why the announcement is significant.

For starters, it underscores how the supercomputing market — while still not a mass-market enterprise — is increasingly developing more focus around specific areas of research and industries. In this case, it underscores how health research has become more complex, and how applications of artificial intelligence have both spurred that complexity and, in the form of stronger computing power, also provided a better route — some might say one of the only viable routes in the most complex of cases — to medical breakthroughs and discoveries.

It’s also notable that the effort is being forged in the U.K. Nvidia’s deal to buy Arm has seen some resistance in the market — with one group leading a campaign to stop the sale and take Arm independent — but this latest announcement underscores that the company is already involved pretty deeply in the U.K. market, bolstering Nvidia’s case to double down even further. (Yes, chip reference designs and building supercomputers are different enterprises, but the argument for Nvidia is one of commitment and presence.)

“AI and machine learning are like a new microscope that will help scientists to see things that they couldn’t see otherwise,” said Dr. Hal Barron, chief scientific officer and president, R&D, GSK, in a statement. “NVIDIA’s investment in computing, combined with the power of deep learning, will enable solutions to some of the life sciences industry’s greatest challenges and help us continue to deliver transformational medicines and vaccines to patients. Together with GSK’s new AI lab in London, I am delighted that these advanced technologies will now be available to help the U.K.’s outstanding scientists.”

“The use of big data, supercomputing and artificial intelligence have the potential to transform research and development; from target identification through clinical research and all the way to the launch of new medicines,” added James Weatherall, PhD, head of Data Science and AI, AstraZeneca, in his statement.

“Recent advances in AI have seen increasingly powerful models being used for complex tasks such as image recognition and natural language understanding,” said Sebastien Ourselin, head, School of Biomedical Engineering & Imaging Sciences at King’s College London. “These models have achieved previously unimaginable performance by using an unprecedented scale of computational power, amassing millions of GPU hours per model. Through this partnership, for the first time, such a scale of computational power will be available to healthcare research – it will be truly transformational for patient health and treatment pathways.”

Dr. Ian Abbs, chief executive and chief medical director of Guy’s and St Thomas’ NHS Foundation Trust, said: “If AI is to be deployed at scale for patient care, then accuracy, robustness and safety are of paramount importance. We need to ensure AI researchers have access to the largest and most comprehensive datasets that the NHS has to offer, our clinical expertise, and the required computational infrastructure to make sense of the data. This approach is not only necessary, but also the only ethical way to deliver AI in healthcare – more advanced AI means better care for our patients.”

“Compact AI has enabled real-time sequencing in the palm of your hand, and AI supercomputers are enabling new scientific discoveries in large-scale genomic data sets,” added Gordon Sanghera, CEO, Oxford Nanopore Technologies. “These complementary innovations in data analysis support a wealth of impactful science in the U.K., and critically, support our goal of bringing genomic analysis to anyone, anywhere.”


Sep 29, 2020

How Twilio built its own conference platform

Twilio’s annual customer conference was supposed to happen in May, but like everyone else with live events scheduled for this year, it ran smack-dab into COVID-19 and was forced to cancel. That left the company wondering how to reimagine the event online. It began an RFP process to find a vendor to help, but eventually concluded it could use its own APIs and build the platform itself.

That’s a pretty bold move, but one of the key issues facing Twilio was how to recreate the in-person experience of the show floor where people could chat with specific API experts. After much internal deliberation, they realized that was what their communication API products were designed to do.

Once they committed to going their own way, they began a long process that involved figuring out must-have features, building consensus in the company, creating a development and testing cycle and finding third-party partnerships to help them when they ran into the limitations of their own products.

All that work culminates this week when Twilio holds its annual Signal Conference online Wednesday and Thursday. We spoke to In-Young Chang, director of experience at Twilio, to learn how this project came together.

Chang said once the decision was made to go virtual, the biggest issue for them (and for anyone putting on a virtual conference) was how to recreate that human connection that is a natural part of the in-person conference experience.

The company’s first step was to put out a request for proposals with event software vendors. She said the problem was that these platforms, for the most part, hadn’t been designed to be fully virtual; at best, they took a hybrid approach, where some people attended virtually but most were there in person.

“We met with a lot of different vendors, vendors that a lot of big tech companies were using, but there were pros to some of them, and then cons to others, and none of them truly fit everything that we needed, which was connecting our customers to product experts [like we do at our in-person conferences],” Chang told TechCrunch.

Even though they had winnowed the proposals down to a manageable few, they weren’t truly satisfied with what the event software vendors were offering, and they came to a realization.

“Either we find a vendor who can do this fully custom in three months’ time, or [we do it ourselves]. This is what we do. This is in our DNA, so we can make this happen. The hard part became how do you prioritize because once we made the conference fully software-based, the possibilities were endless,” she said.

All of this happened pretty quickly. The team interviewed the vendors in May, and by June made the decision to build it themselves. They began the process of designing the event software they would be using, taking advantage of their own communications capabilities, first and foremost.

The first thing they needed to do was meet with various stakeholders inside the company and figure out the must-have features in their custom platform. She said that reeling in people’s ambitions for version 1.0 of the platform was part of the challenge that they faced trying to pull this together.

“We only had three months. It wasn’t going to be totally perfect. There had to be some prioritization and compromises, but with our APIs we [felt that we] could totally make this happen,” Chang said.

They started meeting with different groups across the company to find out their must-haves. They knew that they wanted to recreate this personal contact experience. Other needs included typical conference activities like being able to collect leads and build agendas and the kinds of things you would expect to do at any conference, whether in-person or virtual.

As the team met with the various constituencies across the company, they began to get a sense of what they needed to build and they created a priorities document, which they reviewed with the Signal leadership team. “There were some hard conversations and some debates, but everyone really had goodwill toward each other knowing that we only had a few months,” she said.


Signal Concierge Agent helps attendees navigate the online conference. Image Credits: Twilio

The team believed it could build a platform that met the company’s needs, but with only 10 developers working on it, they had a huge challenge to get it done in three months.

With one of the major priorities putting customers together with the right Twilio personnel, they decided to put their customer service platform, Twilio Flex, to work on the problem. Flex combines voice, messaging, video and chat in one interface. While the conference wasn’t a pure customer service issue, they believed that they could leverage the platform to direct requests to people with the right expertise and recreate the experience of walking up to the booth and asking questions of a Twilio employee with a particular skill set.

“Twilio Flex has TaskRouter, which allows us to assign agents unique skills-based characteristics, like you’re a video expert, so I’m going to tag you as a video expert. If anyone has a question around video, I know that we can route it directly to you,” Chang explained.
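
Twilio hasn’t shared the conference code itself, but TaskRouter’s public API makes the skills-based pattern Chang describes easy to sketch: tag workers with skill attributes, then create tasks that a workflow routes to a matching expert. The SIDs and the workflow’s routing rules below are placeholders, not the Signal setup.

```python
import json
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])
workspace = client.taskrouter.workspaces("WSXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX")  # placeholder SID

# Tag an employee as a video expert; TaskRouter matches on these attributes.
worker = workspace.workers.create(
    friendly_name="Ada from the Video team",
    attributes=json.dumps({"skills": ["video"]}),
)

# An attendee question becomes a task; a workflow whose target expression is
# something like  "video" IN worker.skills  routes it to a matching expert.
task = workspace.tasks.create(
    workflow_sid="WWXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",  # placeholder SID
    attributes=json.dumps({"topic": "video", "question": "How do I record a Room?"}),
)
```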

They also built a bot companion, called Signal Concierge, that moves through the online experience with each attendee and helps them find what they need, applying their customer service approach to the conference experience.

“Signal Concierge is your conference companion, so that if you ever have a question about what session you should go to next or [you want to talk to an expert], there’s just one place that you have to go to get an answer to your question, and we’ll be there to help you with it,” she said.

The company couldn’t do everything with Twilio’s tools, so it turned to third parties in those cases. “We continued our partnership with Klik, a conference data and badging platform all available via API. And Perficient, a Twilio SI partner we hired to augment the internal team to more quickly implement the custom Twilio Flex experience in the tight time frame we had. And Plexus, who provided streaming capabilities that we could use in an open-source video player,” she said.

They spent September testing what they built, making sure the Signal Concierge was routing requests correctly and all the moving parts were working. They open the virtual doors on Wednesday morning and get to see how well they pulled it off.

Chang says she is proud of what her team pulled off, but recognizes this is a first pass and future versions will have additional features that they didn’t have time to build.

“This is V1 of the platform. It’s not by any means exactly what we want, but we’re really proud of what we were able to accomplish from scoping the content to actually building the platform within three months’ time,” she said.

Sep 22, 2020

Microsoft challenges Twilio with the launch of Azure Communication Services

Microsoft today announced the launch of Azure Communication Services, a new set of features in its cloud that enable developers to add voice and video calling, chat and text messages to their apps, as well as old-school telephony.

The company describes the new set of services as the “first fully managed communication platform offering from a major cloud provider,” and that seems right: Google and AWS offer some of these features (AWS’s notification service, for example), but not as part of a cohesive communication service. Indeed, Azure Communication Services looks more like a competitor to the core features of Twilio or the up-and-coming MessageBird.

Over the course of the last few years, Microsoft has built up a lot of experience in this area, in large parts thanks to the success of its Teams service. Unsurprisingly, that’s something Microsoft is also playing up in its announcement.

“Azure Communication Services is built natively on top of a global, reliable cloud — Azure. Businesses can confidently build and deploy on the same low latency global communication network used by Microsoft Teams to support over 5 billion meeting minutes in a single day,” writes Scott Van Vliet, corporate vice president for Intelligent Communication at the company.

Microsoft also stresses that it offers a set of additional smart services that developers can tap into to build out their communication services, including its translation tools, for example. The company also notes that its services are encrypted to meet HIPAA and GDPR standards.

Like similar services, developers access the various capabilities through a set of new APIs and SDKs.

As for the core services, the capabilities here are pretty much what you’d expect. There’s voice and video calling (and the ability to shift between them). There’s support for chat and, starting in October, users will also be able to send text messages. Microsoft says developers will be able to send these to users anywhere, with Microsoft positioning it as a global service.
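
Microsoft’s announcement doesn’t include code, but the Python SDK shipped alongside the service gives a sense of the API surface. A minimal SMS sketch, assuming the SDK’s eventual general-availability shape (the connection string and phone numbers are placeholders):

```python
from azure.communication.sms import SmsClient  # pip install azure-communication-sms

# Placeholder connection string from an Azure Communication Services resource.
client = SmsClient.from_connection_string(
    "endpoint=https://<resource>.communication.azure.com/;accesskey=<key>"
)

results = client.send(
    from_="+18005550100",   # a number provisioned through the service
    to=["+14255550123"],    # one or more recipients
    message="Your appointment is confirmed.",
)
for r in results:
    print(r.to, r.successful)
```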

Provisioning phone numbers, too, is part of the service: developers will be able to provision numbers for inbound and outbound calls, port existing numbers, request new ones and — most importantly for contact-center users — integrate them with existing on-premises equipment and carrier networks.

“Our goal is to meet businesses where they are and provide solutions to help them be resilient and move their business forward in today’s market,” writes Van Vliet. “We see rich communication experiences – enabled by voice, video, chat, and SMS – continuing to be an integral part in how businesses connect with their customers across devices and platforms.”
