Jun 22, 2020

Hasura launches managed cloud service for its open-source GraphQL API platform

Hasura is an open-source engine that connects to PostgreSQL databases and microservices across hybrid- and multi-cloud environments and then automatically builds a GraphQL API backend for them, making it easier for developers to build their own data-driven applications on top of this unified API. For a while now, the San Francisco-based startup has offered a paid version (Hasura Pro) with enterprise-ready reliability and security tools, in addition to its free open-source version. Today, the company launched Hasura Cloud, which takes the existing Pro version, adds a number of cloud-specific features like dynamic caching, auto-scaling and consumption-based pricing, and brings those together in a fully managed service.
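To make that concrete, here is a minimal sketch of the kind of query Hasura auto-generates from a database schema. The table name, fields and endpoint URL below are hypothetical, invented for illustration; Hasura's generated schema does support filter operators like `_eq` and arguments like `limit`.

```python
import json

# Hypothetical query of the sort Hasura generates for a Postgres table
# named "users" (table and field names are made up for this example).
query = """
query {
  users(where: {active: {_eq: true}}, limit: 10) {
    id
    name
    email
  }
}
"""

# A client would POST this as a JSON payload to Hasura's single GraphQL
# endpoint, e.g. https://my-app.example.com/v1/graphql (placeholder URL).
payload = json.dumps({"query": query})
```

The point is that developers never write resolvers for these queries by hand; Hasura derives queries, mutations and subscriptions from the underlying tables.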

Image Credits: Hasura

At its core, Hasura’s service promises businesses the ability to bring together data from their various siloed databases and allow their developers to extract value from them through its GraphQL APIs. While GraphQL is still relatively new, the Facebook-incubated technology has quickly become extremely popular among many development teams.

Before founding the company and launching it in 2018, Hasura CEO and co-founder Tanmai Gopal worked for a consulting firm — and, as with so many founders, that’s where he got the inspiration for the service.

“One of the key things that we noticed was that in the entire landscape, computing is becoming better, there are better frameworks, it is easier to deploy code, databases are becoming better and they kind of work everywhere,” he said. “But this kind of piece in the middle that is still a bottleneck and that there isn’t really a good solution for is this data access piece.” Almost by default, most companies host data in various SaaS services and databases — and now they were trying to figure out how to develop apps based on this for both internal and external consumers, noted Gopal. “This data distribution problem was this bottleneck where everybody would just spend massive amounts of time and money. And we invented a way of kind of automating that,” he explained.

The choice of GraphQL was also pretty straightforward, especially because GraphQL services are an easy way for developers to consume data (even though, as Gopal noted, it’s not always fun to build the GraphQL service itself). One thing worth noting about the core Hasura engine is that it is written in Haskell, a rather unusual choice.

Image Credits: Hasura

The team tells me that Hasura is now nearing 50 million downloads for its free version and the company is seeing large and small users from across various industries relying on its products, which is probably no surprise, given that the company is trying to solve a pretty universal problem around data access and consumption.

Over the last few quarters, the team worked on launching its cloud service. “We’ve been thinking of the cloud in a very different way,” Gopal said. “It’s not your usual, take the open-source solution and host it, like a MongoDB Atlas or Confluent. What we’ve done is we’ve said, we’re going to re-engineer the open-source solution to be entirely multi-tenant and be completely pay-per pricing.”

Given this philosophy, it’s no surprise that Hasura’s pricing is purely based on how much data a user moves through the service. “It’s much closer to our value proposition,” Hasura co-founder and COO Rajoshi Ghosh said. “The value proposition is about data access. The big part of it is the fact that you’re getting this data from your databases. But the very interesting part is that this data can actually come from anywhere. This data could be in your third-party services, part of your data could be living in Stripe and it could be living in Salesforce, and it could be living in other services. […] We’re the data access infrastructure in that sense. And this pricing also — from a mental model perspective — makes it much clearer that that’s the value that we’re adding.”

Now, there are obviously plenty of other data-centric API services on the market, but Gopal argues that Hasura has an advantage because of its advanced caching for dynamic data, for example.

Apr 28, 2020

Checkly raises $2.25M seed round for its monitoring and testing platform

Checkly, a Berlin-based startup that is developing a monitoring and testing platform for DevOps teams, today announced that it has raised a $2.25 million seed round led by Accel. A number of angel investors, including Instana CEO Mirko Novakovic, Zeit CEO Guillermo Rauch and former Twilio CTO Ott Kaukver, also participated in this round.

The company’s SaaS platform allows developers to monitor their API endpoints and web apps — and it obviously alerts you when something goes awry. The transaction monitoring tool makes it easy to regularly test interactions with front-end websites without having to actually write any code. The test software is based on Google’s open-source Puppeteer framework, and to build its commercial platform, Checkly also developed Puppeteer Recorder, a low-code Chrome extension for creating these end-to-end testing scripts.

The team believes that it’s the combination of end-to-end testing and active monitoring, as well as its focus on modern DevOps teams, that makes Checkly stand out in what is already a pretty crowded market for monitoring tools.

“As a customer in the monitoring market, I thought it had long been stuck in the 90s and I needed a tool that could support teams in JavaScript and work for all the different roles within a DevOps team. I set out to build it, quickly realizing that testing was equally important to address,” said Tim Nolet, who founded the company in 2018. “At Checkly, we’ve created a market-defining tool that our customers have been demanding, and we’ve already seen strong traction through word of mouth. We’re delighted to partner with Accel on building out our vision to become the active reliability platform for DevOps teams.”

Nolet’s co-founders are Hannes Lenke, who founded TestObject (which was later acquired by Sauce Labs), and Timo Euteneuer, who was previously Director Sales EMEA at Sauce Labs.

The company says that it currently has about 125 paying customers who run about 1 million checks per day on its platform. Pricing for its services starts at $7 per month for individual developers, with plans for small teams starting at $29 per month.

Apr 22, 2020

AWS launches Amazon AppFlow, its new SaaS integration service

AWS today launched Amazon AppFlow, a new integration service that makes it easier for developers to transfer data between AWS and SaaS applications like Google Analytics, Marketo, Salesforce, ServiceNow, Slack, Snowflake and Zendesk. Like similar services, including Microsoft Azure’s Power Automate, for example, developers can trigger these flows based on specific events, at pre-set times or on-demand.

Unlike some of its competitors, though, AWS is positioning this service more as a data transfer service than a way to automate workflows, and while the data flow can be bi-directional, AWS’s announcement focuses mostly on moving data from SaaS applications to other AWS services for further analysis. For this, AppFlow also includes a number of tools for transforming the data as it moves through the service.

“Developers spend huge amounts of time writing custom integrations so they can pass data between SaaS applications and AWS services so that it can be analysed; these can be expensive and can often take months to complete,” said AWS principal advocate Martin Beeby in today’s announcement. “If data requirements change, then costly and complicated modifications have to be made to the integrations. Companies that don’t have the luxury of engineering resources might find themselves manually importing and exporting data from applications, which is time-consuming, risks data leakage, and has the potential to introduce human error.”

Every flow (which AWS defines as a call to a source application to transfer data to a destination) costs $0.001 per run, though, in typical AWS fashion, there’s also a cost associated with data processing (starting at $0.02 per GB).
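Using the rates quoted in the announcement, a rough back-of-the-envelope estimate (the usage numbers below are invented for illustration, and actual AWS pricing tiers may differ):

```python
# AppFlow cost sketch using the rates quoted above:
# $0.001 per flow run, plus data processing starting at $0.02 per GB.
FLOW_RUN_PRICE = 0.001     # dollars per flow run
DATA_PRICE_PER_GB = 0.02   # dollars per GB processed (entry tier)

def monthly_cost(runs: int, gb_processed: float) -> float:
    """Estimated monthly AppFlow bill for a given number of flow runs
    and gigabytes of data processed."""
    return runs * FLOW_RUN_PRICE + gb_processed * DATA_PRICE_PER_GB

# e.g. 100,000 runs moving 500 GB in a month: $100 + $10 = $110
print(monthly_cost(100_000, 500))  # 110.0
```

In other words, the per-run charge dominates for chatty, low-volume flows, while the per-GB charge dominates for bulk transfers.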

“Our customers tell us that they love having the ability to store, process, and analyze their data in AWS. They also use a variety of third-party SaaS applications, and they tell us that it can be difficult to manage the flow of data between AWS and these applications,” said Kurt Kufeld, Vice President, AWS. “Amazon AppFlow provides an intuitive and easy way for customers to combine data from AWS and SaaS applications without moving it across the public Internet. With Amazon AppFlow, our customers bring together and manage petabytes, even exabytes, of data spread across all of their applications – all without having to develop custom connectors or manage underlying API and network connectivity.”

At this point, the number of supported services remains comparatively low, with only 14 possible sources and four destinations (Amazon Redshift and S3, as well as Salesforce and Snowflake). Sometimes, depending on the source you select, the only possible destination is Amazon’s S3 storage service.

Over time, the number of integrations will surely increase, but for now, it feels like there’s still quite a bit more work to do for the AppFlow team to expand the list of supported services.

AWS has long left this market to competitors, even though it has tools like AWS Step Functions for building serverless workflows across AWS services and EventBridge for connecting applications. Interestingly, EventBridge currently supports a far wider range of third-party sources, but as the name implies, its focus is more on triggering events in AWS than moving data between applications.

Oct 2, 2019

Kong acquires Insomnia, launches Kong Studio for API development

API and microservices platform Kong today announced that it has acquired Insomnia, a popular open-source tool for debugging APIs. The company, which also recently announced that it had raised a $43 million Series C round, has already put this acquisition to work by using it to build Kong Studio, a tool for designing, building and maintaining APIs for both REST and GraphQL endpoints.

As Kong CEO and co-founder Augusto Marietti told me, the company wants to expand its platform to cover the full service life cycle. So far, it has mostly focused on the runtime, but now it wants to enable developers to also design and test their services. “We looked at the space and Insomnia is the number one open source API testing platform,” he told me. “And we thought that by having Insomnia in our portfolio, we will get the pre-production part of things and on top of that, we’ll be able to build Kong Studio, which is kind of the other side of Insomnia that allows you to design APIs.”


Insomnia launched in 2015, as a side project of its sole developer, Greg Schier. Schier quit his job in 2016 to focus on Insomnia full-time and then open-sourced it in 2017. Today, the project has 100 contributors and the tool is used by “hundreds of thousands of developers,” according to Schier.

Marietti says both the open-source project and the paid Insomnia Plus service will continue to operate as before.

In addition to Kong Studio and the Insomnia acquisition, the company also today launched the latest version of its Enterprise service, the aptly named Kong Enterprise 2020. New features here include support for REST, Kafka Streams and GraphQL. Kong also launched Kong Gateway 2.0 with additional GraphQL support and the ability to write plugins in Go.

Aug 1, 2019

Dasha AI is calling so you don’t have to

While you’d be hard-pressed to find any startup not brimming with confidence over the disruptive idea they’re chasing, it’s not often you come across a young company as calmly convinced it’s engineering the future as Dasha AI.

The team is building a platform for designing human-like voice interactions to automate business processes. Put simply, it’s using AI to make machine voices a whole lot less robotic.

“What we definitely know is this will definitely happen,” says CEO and co-founder Vladislav Chernyshov. “Sooner or later the conversational AI/voice AI will replace people everywhere where the technology will allow. And it’s better for us to be the first mover than the last in this field.”

“In 2018 in the U.S. alone there were 30 million people doing some kind of repetitive tasks over the phone. We can automate these jobs now or we are going to be able to automate it in two years,” he goes on. “If you multiply it with Europe and the massive call centers in India, Pakistan and the Philippines you will probably have something like close to 120 million people worldwide… and they are all subject for disruption, potentially.”

The New York-based startup has been operating in relative stealth up to now. But it’s breaking cover to talk to TechCrunch — announcing a $2 million seed round, led by RTP Ventures and RTP Global, an early-stage investor that’s backed the likes of Datadog and RingCentral. RTP’s venture arm, also based in NY, writes on its website that it prefers engineer-founded companies that “solve big problems with technology.” “We like technology, not gimmicks,” the fund warns with added emphasis.

Dasha’s core tech right now includes what Chernyshov describes as “a human-level, voice-first conversation modelling engine;” a hybrid text-to-speech engine which he says enables it to model speech disfluencies (aka, the ums and ahs, pitch changes etc. that characterize human chatter); plus “a fast and accurate” real-time voice activity detection algorithm which detects speech in less than 100 milliseconds, meaning the AI can turn-take and handle interruptions in the conversation flow. The platform also can detect a caller’s gender — a feature that can be useful for healthcare use cases, for example.

Another component Chernyshov flags is “an end-to-end pipeline for semi-supervised learning” — so it can retrain the models in real time “and fix mistakes as they go” — until Dasha hits the claimed “human-level” conversational capability for each business process niche. (To be clear, the AI cannot adapt its speech to an interlocutor in real time — as human speakers naturally shift their accents closer to bridge any dialect gap — but Chernyshov suggests it’s on the roadmap.)

“For instance, we can start with 70% correct conversations and then gradually improve the model up to say 95% of correct conversations,” he says of the learning element, though he admits there are a lot of variables that can impact error rates — not least the call environment itself. Even cutting edge AI is going to struggle with a bad line.

The platform also has an open API so customers can plug the conversation AI into their existing systems — be it telephony, Salesforce software or a developer environment, such as Microsoft Visual Studio.

Currently they’re focused on English, though Chernyshov says the architecture is “basically language agnostic” — but does require “a big amount of data.”

The next step will be to open up the dev platform to enterprise customers, beyond the initial 20 beta testers, which include companies in the banking, healthcare and insurance sectors — with a release slated for later this year or Q1 2020.

Test use cases so far include banks using the conversation engine for brand loyalty management to run customer satisfaction surveys that can turnaround negative feedback by fast-tracking a response to a bad rating — by providing (human) customer support agents with an automated categorization of the complaint so they can follow up more quickly. “This usually leads to a wow effect,” says Chernyshov.

Ultimately, he believes there will be two or three major AI platforms globally providing businesses with an automated, customizable conversational layer — sweeping away the patchwork of chatbots currently filling in the gap. And, of course, Dasha intends their “Digital Assistant Super Human Alike” to be one of those few.

“There is clearly no platform [yet],” he says. “Five years from now this will sound very weird that all companies now are trying to build something. Because in five years it will be obvious — why do you need all this stuff? Just take Dasha and build what you want.”

“This reminds me of the situation in the 1980s when it was obvious that the personal computers are here to stay because they give you an unfair competitive advantage,” he continues. “All large enterprise customers all over the world… were building their own operating systems, they were writing software from scratch, constantly reinventing the wheel just in order to be able to create this spreadsheet for their accountants.

“And then Microsoft with MS-DOS came in… and everything else is history.”

That’s not all they’re building, either. Dasha’s seed financing will be put toward launching a consumer-facing product atop its B2B platform to automate the screening of recorded message robocalls. So, basically, they’re building a robot assistant that can talk to — and put off — other machines on humans’ behalf.

Which does kind of suggest the AI-fueled future will entail an awful lot of robots talking to each other… ???

Chernyshov says this B2C call-screening app will most likely be free. But then if your core tech looks set to massively accelerate a non-human caller phenomenon that many consumers already see as a terrible plague on their time and mind then providing free relief — in the form of a counter AI — seems the very least you should do.

Not that Dasha can be accused of causing the robocaller plague, of course. Recorded messages hooked up to call systems have been spamming people with unsolicited calls for far longer than the startup has existed.

Dasha’s PR notes Americans were hit with 26.3 billion robocalls in 2018 alone — up “a whopping” 46% on 2017.

Its conversation engine, meanwhile, has only made some 3 million calls to date, clocking its first call with a human in January 2017. But the goal from here on in is to scale fast. “We plan to aggressively grow the company and the technology so we can continue to provide the best voice conversational AI to a market which we estimate to exceed $30 billion worldwide,” runs a line from its PR.

After the developer platform launch, Chernyshov says the next step will be to open access to business process owners by letting them automate existing call workflows without needing to be able to code (they’ll just need an analytic grasp of the process, he says).

Later — pegged for 2022 on the current roadmap — will be the launch of “the platform with zero learning curve,” as he puts it. “You will teach Dasha new models just like typing in a natural language and teaching it like you can teach any new team member on your team,” he explains. “Adding a new case will actually look like a word editor — when you’re just describing how you want this AI to work.”

His prediction is that a majority — circa 60% — of all major cases that businesses face — “like dispatching, like probably upsales, cross sales, some kind of support etc., all those cases” — will be able to be automated “just like typing in a natural language.”

So if Dasha’s AI-fueled vision of voice-based business process automation comes to fruition, then humans getting orders of magnitude more calls from machines looks inevitable — as machine learning supercharges artificial speech by making it sound slicker, act smarter and seem, well, almost human.

But perhaps a savvier generation of voice AIs will also help manage the “robocaller” plague by offering advanced call screening? And as non-human voice tech marches on from dumb recorded messages to chatbot-style AIs running on scripted rails to — as Dasha pitches it — fully responsive, emoting, even emotion-sensitive conversation engines that can slip right under the human radar, maybe the robocaller problem will eat itself? I mean, if you didn’t even realize you were talking to a robot, how are you going to get annoyed about it?

Dasha claims 96.3% of the people who talk to its AI “think it’s human,” though it’s not clear on what sample size the claim is based. (To my ear there are definite “tells” in the current demos on its website. But in a cold-call scenario it’s not hard to imagine the AI passing, if someone’s not paying much attention.)

The alternative scenario, in a future infested with unsolicited machine calls, is that all smartphone OSes add kill switches, such as the one in iOS 13 — which lets people silence calls from unknown numbers.

And/or more humans simply never pick up phone calls unless they know who’s on the end of the line.

So it’s really doubly savvy of Dasha to create an AI capable of managing robot calls — meaning it’s building its own fallback — a piece of software willing to chat to its AI in the future, even if actual humans refuse.

Dasha’s robocall screener app, which is slated for release in early 2020, will also be spammer-agnostic — in that it’ll be able to handle and divert human salespeople too, as well as robots. After all, a spammer is a spammer.

“Probably it is the time for somebody to step in and ‘don’t be evil,’ ” says Chernyshov, echoing Google’s old motto, albeit perhaps not entirely reassuringly given the phrase’s lapsed history — as we talk about the team’s approach to ecosystem development and how machine-to-machine chat might overtake human voice calls.

“At some point in the future we will be talking to various robots much more than we probably talk to each other — because you will have some kind of human-like robots at your house,” he predicts. “Your doctor, gardener, warehouse worker, they all will be robots at some point.”

The logic at work here is that if resistance to an AI-powered Cambrian Explosion of machine speech is futile, it’s better to be at the cutting edge, building the most human-like robots — and making the robots at least sound like they care.

Dasha’s conversational quirks certainly can’t be called a gimmick. Even if the team’s close attention to mimicking the vocal flourishes of human speech — the disfluencies, the ums and ahs, the pitch and tonal changes for emphasis and emotion — might seem so at first airing.

In one of the demos on its website you can hear a clip of a very chipper-sounding male voice, who identifies himself as “John from Acme Dental,” taking an appointment call from a female (human), and smoothly dealing with multiple interruptions and time/date changes as she changes her mind. Before, finally, dealing with a flat cancellation.

A human receptionist might well have got mad that the caller essentially just wasted their time. Not John, though. Oh no. He ends the call as cheerily as he began, signing off with an emphatic: “Thank you! And have a really nice day. Bye!”

If the ultimate goal is Turing Test levels of realism in artificial speech — i.e. a conversation engine so human-like it can pass as human to a human ear — you do have to be able to reproduce, with precision timing, the verbal baggage that’s wrapped around everything humans say to each other.

This tonal layer does essential emotional labor in the business of communication, shading and highlighting words in a way that can adapt or even entirely transform their meaning. It’s an integral part of how we communicate. And thus a common stumbling block for robots.

So if the mission is to power a revolution in artificial speech that humans won’t hate and reject, then engineering full spectrum nuance is just as important a piece of work as having an amazing speech recognition engine. A chatbot that can’t do all that is really the gimmick.

Chernyshov claims Dasha’s conversation engine is “at least several times better and more complex than [Google] Dialogflow, [Amazon] Lex, [Microsoft] Luis or [IBM] Watson,” dropping a laundry list of rival speech engines into the conversation.

He argues none are on a par with what Dasha is being designed to do.

The difference is the “voice-first modeling engine.” “All those [rival engines] were built from scratch with a focus on chatbots — on text,” he says, couching modeling voice conversation “on a human level” as much more complex than the more limited chatbot-approach — and hence what makes Dasha special and superior.

“Imagination is the limit. What we are trying to build is an ultimate voice conversation AI platform so you can model any kind of voice interaction between two or more human beings.”

Google did demo its own stuttering voice AI — Duplex — last year, when it also took flak for a public demo in which it appeared not to have told restaurant staff up front they were going to be talking to a robot.

Chernyshov isn’t worried about Duplex, though, saying it’s a product, not a platform.

“Google recently tried to headhunt one of our developers,” he adds, pausing for effect. “But they failed.”

He says Dasha’s engineering staff make up more than half (28) its total headcount (48), and include two doctorates of science; three PhDs; five PhD students; and 10 masters of science in computer science.

It has an R&D office in Russia, which Chernyshov says helps make the funding go further.

“More than 16 people, including myself, are ACM ICPC finalists or semi finalists,” he adds — likening the competition to “an Olympic game but for programmers.” A recent hire — chief research scientist, Dr. Alexander Dyakonov — is both a doctor of science professor and former Kaggle No.1 GrandMaster in machine learning. So with in-house AI talent like that you can see why Google, uh, came calling…


But why not have Dasha ID itself as a robot by default? On that Chernyshov says the platform is flexible — which means disclosure can be added. But in markets where it isn’t a legal requirement, the door is being left open for “John” to slip cheerily by. “Blade Runner” here we come.

The team’s driving conviction is that emphasis on modeling human-like speech will, down the line, allow their AI to deliver universally fluid and natural machine-human speech interactions, which in turn open up all sorts of expansive and powerful possibilities for embeddable next-gen voice interfaces. Ones that are much more interesting than the current crop of gadget talkies.

This is where you could raid sci-fi/pop culture for inspiration. Such as Kitt, the dryly witty talking car from the 1980s TV series “Knight Rider.” Or, to throw in a British TV reference, Holly the self-deprecating yet sardonic human-faced computer in “Red Dwarf.” (Or, indeed, Kryten, the guilt-ridden android butler.) Chernyshov’s suggestion is to imagine Dasha embedded in a Boston Dynamics robot. But surely no one wants to hear those crawling nightmares scream…

Dasha’s five-year+ roadmap includes the eyebrow-raising ambition to evolve the technology to achieve “a general conversational AI.” “This is a science fiction at this point. It’s a general conversational AI, and only at this point you will be able to pass the whole Turing Test,” he says of that aim.

“Because we have a human-level speech recognition, we have human-level speech synthesis, we have generative non-rule based behavior, and this is all the parts of this general conversational AI. And I think that we can — and scientific society — we can achieve this together in like 2024 or something like that.

“Then the next step, in 2025, this is like autonomous AI — embeddable in any device or a robot. And hopefully by 2025 these devices will be available on the market.”

Of course the team is still dreaming distance away from that AI wonderland/dystopia (depending on your perspective) — even if it’s date-stamped on the roadmap.

But if a conversational engine ends up in command of the full range of human speech — quirks, quibbles and all — then designing a voice AI may come to be thought of as akin to designing a TV character or cartoon personality. So very far from what we currently associate with the word “robotic.” (And wouldn’t it be funny if the term “robotic” came to mean “hyper entertaining” or even “especially empathetic” thanks to advances in AI.)

Let’s not get carried away though.

In the meantime, there are “uncanny valley” pitfalls of speech disconnect to navigate if the tone being (artificially) struck hits a false note. (And, on that front, if you didn’t know “John from Acme Dental” was a robot you’d be forgiven for misreading his chipper sign off to a total time waster as pure sarcasm. But an AI can’t appreciate irony. Not yet anyway.)

Nor can robots appreciate the difference between ethical and unethical verbal communication they’re being instructed to carry out. Sales calls can easily cross the line into spam. And what about even more dystopic uses for a conversation engine that’s so slick it can convince the vast majority of people it’s human — like fraud, identity theft, even election interference… the potential misuses could be terrible and scale endlessly.

Although if you straight out ask Dasha whether it’s a robot Chernyshov says it has been programmed to confess to being artificial. So it won’t tell you a barefaced lie.


How will the team prevent problematic uses of such a powerful technology?

“We have an ethics framework and when we will be releasing the platform we will implement a real-time monitoring system that will monitor potential abuse or scams, and also it will ensure people are not being called too often,” he says. “This is very important. That we understand that this kind of technology can be potentially probably dangerous.”

“At the first stage we are not going to release it to all the public. We are going to release it in a closed alpha or beta. And we will be curating the companies that are going in to explore all the possible problems and prevent them from being massive problems,” he adds. “Our machine learning team are developing those algorithms for detecting abuse, spam and other use cases that we would like to prevent.”

There’s also the issue of verbal “deepfakes” to consider. Especially as Chernyshov suggests the platform will, in time, support cloning a voiceprint for use in the conversation — opening the door to making fake calls in someone else’s voice. Which sounds like a dream come true for scammers of all stripes. Or a way to really supercharge your top performing salesperson.

Safe to say, the counter technologies — and thoughtful regulation — are going to be very important.

There’s little doubt that AI will be regulated. In Europe policymakers have tasked themselves with coming up with a framework for ethical AI. And in the coming years policymakers in many countries will be trying to figure out how to put guardrails on a technology class that, in the consumer sphere, has already demonstrated its wrecking-ball potential — with the automated acceleration of spam, misinformation and political disinformation on social media platforms.

“We have to understand that at some point this kind of technologies will be definitely regulated by the state all over the world. And we as a platform we must comply with all of these requirements,” agrees Chernyshov, suggesting machine learning will also be able to identify whether a speaker is human or not — and that an official caller status could be baked into a telephony protocol so people aren’t left in the dark on the “bot or not” question. 

“It should be human-friendly. Don’t be evil, right?”

Asked whether he considers what will happen to the people working in call centers whose jobs will be disrupted by AI, Chernyshov is quick with the stock answer — that new technologies create jobs too, saying that’s been true right throughout human history. Though he concedes there may be a lag — while the old world catches up to the new.

Time and tide wait for no human, even when the change sounds increasingly like we do.

Jul
08
2019
--

Grasshopper’s Judith Erwin leaps into innovation banking

In the years following the financial crisis, de novo bank activity in the US slowed to a trickle. But as memories fade, the economy expands and the potential of tech-powered financial services marches forward, entrepreneurs have once again been asking the question, “Should I start a bank?”

And by bank, I’m not referring to a neobank, which sits on top of a bank, or a fintech startup that offers an interesting banking-like service of one kind or another. I mean a bank bank.

One of those entrepreneurs is Judith Erwin, a well-known business banking executive who was part of the founding team at Square 1 Bank, which was acquired by PacWest in 2015. Fast forward a few years and Erwin is back, this time as CEO of the cleverly named Grasshopper Bank in New York.

With over $130 million in capital raised from investors including Patriot Financial and T. Rowe Price Associates, Grasshopper has a notable amount of heft for a banking newbie. But as Erwin and her team seek to build share in the innovation banking market, she knows that she’ll need the capital as she navigates a hotly contested niche that has benefited from a robust start-up and venture capital environment.

Gregg Schoenberg: Good to see you, Judith. To jump right in, in my opinion, you were a key part of one of the most successful de novo banks in quite some time. You were responsible for VC relationships there, right?


Judith Erwin: The VC relationships and the products and services managing the balance sheet around deposits. Those were my two primary roles, but my background is one where people give me broken things, I fix them and give them back.

Schoenberg: Square 1 was purchased for about 22 times earnings and 260% of tangible book, correct?

Erwin: Sounds accurate.

Schoenberg: Plus, the bank had a phenomenal earnings trajectory. Meanwhile, PacWest, which acquired you, was a “perfectly nice bank.” Would that be a fair characterization?

Erwin: Yes.

Schoenberg: Is part of the motivation to start Grasshopper to continue on a journey that maybe ended a little bit prematurely last time?

Erwin: That’s a great insight, and I did feel like we had sold too soon. It was a great deal for the investors — which included me — and so I understood it. But absolutely, a lot of what we’re working to do here are things I had hoped to do at Square 1.

Image via Getty Images / Classen Rafael / EyeEm

Schoenberg: You’re obviously aware of the 800-pound gorilla in the room in the form of Silicon Valley Bank. You’ve also got the megabanks that play in the segment, as well as Signature Bank, First Republic, Bridge Bank and others.

Jul
03
2019
--

Capital One CTO George Brady will join us at TC Sessions: Enterprise

When you think of old, giant mainframes that sit in the basement of a giant corporation, still doing the same work they did 30 years ago, chances are you’re thinking about a financial institution. It’s the financial enterprises, though, that are often leading the charge in bringing new technologies and software development practices to their employees and customers. That’s in part because they are in a period of disruption that forces them to become more nimble. Often, this means leaving behind legacy technology and embracing the cloud.

At TC Sessions: Enterprise, which is happening on September 5 in San Francisco, George Brady, Capital One’s executive VP in charge of technology operations, will talk about the company’s journey from legacy hardware and software to embracing the cloud and open source, all while working in a highly regulated industry. Indeed, Capital One was among the first companies to embrace the Facebook-led Open Compute Project, and it’s a member of the Cloud Native Computing Foundation. It’s this transformation at Capital One that Brady is leading.

At our event, Brady will join a number of other distinguished panelists to specifically talk about his company’s journey to the cloud. There, Capital One is using serverless compute, for example, to power its Credit Offers API using AWS’s Lambda service, as well as a number of other cloud technologies.

Before joining Capital One as its CTO in 2014, Brady ran Fidelity Investments’ global enterprise infrastructure team from 2009 to 2014 and served as Goldman Sachs’ head of global business applications infrastructure before that.

Currently, he leads cloud application and platform productization for Capital One. Part of that portfolio is Critical Stack, a secure container orchestration platform for the enterprise. Capital One’s goal with this work is to help companies across industries become more compliant, secure and cost-effective operating in the public cloud.

Early-bird tickets are still on sale for $249; grab yours today before we sell out.

Student tickets are just $75 — grab them here.

Jun
12
2019
--

Apollo raises $22M for its GraphQL platform

Apollo, a San Francisco-based startup that provides a number of developer and operator tools and services around the GraphQL query language, today announced that it has raised a $22 million growth funding round co-led by Andreessen Horowitz and Matrix Partners. Existing investors Trinity Ventures and Webb Investment Network also participated in this round.

Today, Apollo is probably the biggest player in the GraphQL ecosystem. At its core, the company’s services allow businesses to use the Facebook-incubated GraphQL technology to shield their developers from the patchwork of legacy APIs and databases as they look to modernize their technology stacks. The team argues that while REST APIs that talked directly to other services and databases still made sense a few years ago, that approach no longer does now that the number of API endpoints keeps increasing rapidly.

Apollo replaces this with what it calls the Data Graph. “There is basically a missing piece where we think about how people build apps today, which is the piece that connects the billions of devices out there,” Apollo co-founder and CEO Geoff Schmidt told me. “You probably don’t just have one app anymore, you probably have three, for the web, iOS and Android. Or maybe six. And if you’re a two-sided marketplace you’ve got one for buyers, one for sellers and another for your ops team.”

Managing the interfaces between all of these apps quickly becomes complicated and means you have to write a lot of custom code for every new feature. The promise of the Data Graph is that developers can use GraphQL to query the data in the graph and move on, all without having to write the boilerplate code that typically slows them down. At the same time, the ops teams can use the Graph to enforce access policies and implement other security features.
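The idea can be illustrated with a toy resolver: one query shape spans two siloed sources, and the graph layer does the stitching so the app never talks to either backend directly. Everything below (the service dicts, field names and `resolve` helper) is hypothetical, a sketch of the concept rather than Apollo’s actual API:

```python
# Toy illustration of the Data Graph idea: two siloed backend sources are
# stitched behind one graph, and a single query shape pulls fields from both.
users_service = {"u1": {"name": "Ada", "email": "ada@example.com"}}
orders_service = {"u1": [{"id": "o1", "total": 42}]}

def resolve(selection, user_id):
    """Resolve a dict-shaped selection set across both sources."""
    user = users_service[user_id]
    result = {}
    for field, subfields in selection.items():
        if field == "orders":
            # This field lives in a different "service" than the rest.
            result["orders"] = [
                {key: order[key] for key in subfields}
                for order in orders_service[user_id]
            ]
        else:
            result[field] = user[field]
    return result

# One query, two backends, no per-endpoint boilerplate in the app code.
query = {"name": None, "orders": ["id", "total"]}
print(resolve(query, "u1"))
# → {'name': 'Ada', 'orders': [{'id': 'o1', 'total': 42}]}
```

In a real deployment the resolver layer is also where access policies would be enforced, since every request flows through that single choke point.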

“If you think about it, there’s a lot of analogies to what happened with relational databases in the ’80s,” Schmidt said. “There is a need for a new layer in the stack. Previously, your query planner was a human being, not a piece of software, and a relational database is a piece of software that would just give you a database. And you needed a way to query that database, and that syntax was called SQL.”

Geoff Schmidt, Apollo CEO, and Matt DeBergalis, CTO

GraphQL itself, of course, is open source. Apollo is now building a lot of the proprietary tools around this idea of the Data Graph that make it useful for businesses. There’s a cloud-hosted graph manager, for example, that lets you track your schema, a dashboard to track performance, and integrations with continuous integration services. “It’s basically a set of services that keep track of the metadata about your graph and help you manage the configuration of your graph and all the workflows and processes around it,” Schmidt said.

The development of Apollo didn’t come out of nowhere. The founders previously launched Meteor, a framework and set of hosted services that allowed developers to write their apps in JavaScript, both on the front-end and back-end. Meteor was tightly coupled to MongoDB, though, which worked well for some use cases but also held the platform back in the long run. With Apollo, the team decided to go in the opposite direction and instead build a platform that makes being database agnostic the core of its value proposition.

The company also recently launched Apollo Federation, which makes it easier for businesses to work with a distributed graph. Sometimes, after all, your data lives in lots of different places. Federation allows for a distributed architecture that combines all of the different data sources into a single schema that developers can then query.
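A rough sketch of the composition idea, under the simplifying assumption that each subgraph is just a map of types to field owners (this is not Apollo Federation’s actual algorithm or type system):

```python
# Sketch of schema federation: each team owns a slice of the graph, and a
# gateway composes the slices into one queryable schema, rejecting clashes.
def compose(*subgraphs):
    """Merge per-service {type: {field: owning_service}} maps."""
    schema = {}
    for subgraph in subgraphs:
        for type_name, fields in subgraph.items():
            merged = schema.setdefault(type_name, {})
            for field, owner in fields.items():
                if field in merged:
                    raise ValueError(f"conflict on {type_name}.{field}")
                merged[field] = owner
    return schema

# Two teams extend the same User type from different services.
accounts = {"User": {"id": "accounts", "name": "accounts"}}
reviews = {"User": {"reviews": "reviews"}, "Review": {"body": "reviews"}}
print(compose(accounts, reviews))
```

The payoff is that developers query the composed schema as if it were one source, while each field still records which service answers for it.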

Schmidt tells me the company started to get some serious traction last year, and by December it was getting calls from VCs who had heard from their portfolio companies that they were using Apollo.

The company plans to use the new funding to build out its technology and scale its field team to support the enterprises that bet on it, including the open-source technologies that power its services.

“I see the Data Graph as a core new layer of the stack, just like we as an industry invested in the relational database for decades, making it better and better,” Schmidt said. “We’re still finding new uses for SQL and that relational database model. I think the Data Graph is going to be the same way.”

May
02
2019
--

Couchbase’s mobile database gets built-in ML and enhanced synchronization features

Couchbase, the company behind the eponymous NoSQL database, announced a major update to its mobile database today that brings some machine learning smarts, as well as improved synchronization features and enhanced stats and logging support, to the software.

“We’ve led the innovation and data management at the edge since the release of our mobile database five years ago,” Couchbase’s VP of Engineering Wayne Carter told me. “And we’re excited that others are doing that now. We feel that it’s very, very important for businesses to be able to utilize these emerging technologies that do sit on the edge to drive their businesses forward, and both making their employees more effective and their customer experience better.”

The latter part is what drove a lot of today’s updates, Carter noted. He also believes that the database is the right place to do some machine learning. So with this release, the company is adding predictive queries to its mobile database. This new API allows mobile apps to take pre-trained machine learning models and run predictive queries against the data that is stored locally. This would allow a retailer to create a tool that can use a phone’s camera to figure out what part a customer is looking for.

To support these predictive queries, Couchbase Mobile is also getting support for predictive indexes. “Predictive indexes allow you to create an index on prediction, enabling correlation of real-time predictions with application data in milliseconds,” Carter said. In many ways, that’s also the unique value proposition for bringing machine learning into the database. “What you really need to do is you need to utilize the unique values of a database to be able to deliver the answer to those real-time questions within milliseconds,” explained Carter.
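As a rough illustration of the concept, with a stand-in `model` function and a plain dict as the index rather than Couchbase’s actual API: predictions are computed once per document and indexed, so a query that correlates predictions with stored data is a lookup, not a model run.

```python
# Sketch of a "predictive index": run the pre-trained model at write time,
# index the result, and answer prediction queries without invoking the model.
def model(doc):
    # Stand-in for a pre-trained classifier (e.g. one identifying a part
    # from sensor or image features).
    return "bolt" if doc["thread_mm"] else "nail"

documents = {
    "d1": {"thread_mm": 1.5, "aisle": 4},
    "d2": {"thread_mm": 0, "aisle": 7},
}

# Build the predictive index up front.
index = {}
for doc_id, doc in documents.items():
    index.setdefault(model(doc), []).append(doc_id)

def query_by_prediction(label):
    """Query time is a lookup in the index, not a model invocation."""
    return [documents[doc_id] for doc_id in index.get(label, [])]

print(query_by_prediction("bolt"))
# → [{'thread_mm': 1.5, 'aisle': 4}]
```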

The other major new feature in this release is delta synchronization, which allows businesses to push far smaller updates to the databases on their employees’ mobile devices. That’s because they only have to receive the information that changed instead of a full updated database. Carter says this was a highly requested feature, but until now, the company always had to prioritize work on other components of Couchbase.
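The mechanism can be sketched in a few lines. The `diff`/`apply_delta` names and the wire format here are invented for illustration; a real sync implementation also has to handle conflicts and revision history.

```python
# Minimal sketch of delta sync: ship only the keys that changed instead of
# the whole database.
def diff(old, new):
    """Describe how to get from `old` to `new`: changed/added keys, removals."""
    changed = {k: v for k, v in new.items() if old.get(k) != v}
    removed = [k for k in old if k not in new]
    return {"changed": changed, "removed": removed}

def apply_delta(doc, delta):
    """Apply a delta to a local copy, returning the updated document."""
    doc = {**doc, **delta["changed"]}
    for key in delta["removed"]:
        doc.pop(key, None)
    return doc

server = {"sku1": 9.99, "sku2": 4.50, "sku3": 1.25}
device = {"sku1": 9.99, "sku2": 4.00, "sku4": 2.00}

delta = diff(device, server)           # only 3 entries cross the wire
print(delta)
# → {'changed': {'sku2': 4.5, 'sku3': 1.25}, 'removed': ['sku4']}
```

For a large retail catalog where only a handful of prices change per day, the delta is a tiny fraction of the full database, which is the whole point of the feature.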

This is an especially useful feature for the company’s retail customers, a vertical where it has been quite successful. These users need to keep their catalogs up to date, and quite a few of them supply their employees with mobile devices to help shoppers. Rumor has it that Apple, too, is a Couchbase user.

The update also includes a few new features that will be more of interest to operators, including advanced stats reporting and enhanced logging support.

Apr
02
2019
--

Pixeom raises $15M for its software-defined edge computing platform

Pixeom, a startup that offers a software-defined edge computing platform to enterprises, today announced that it has raised a $15 million funding round from Intel Capital, National Grid Partners and previous investor Samsung Catalyst Fund. The company plans to use the new funding to expand its go-to-market capacity and invest in product development.

If the Pixeom name sounds familiar, that may be because you remember it as a Raspberry Pi-based personal cloud platform. Indeed, that’s the service the company first launched back in 2014. It quickly pivoted to an enterprise model, though. As Pixeom CEO Sam Nagar told me, that pivot came about after a conversation the company had with Samsung about adopting its product for that company’s needs. Venture funding was also hard to find. The original Pixeom device allowed users to set up their own personal cloud storage and other applications at home. While there is surely a market for these devices, especially among privacy-conscious tech enthusiasts, it’s not massive, especially as users have become more comfortable with storing their data in the cloud. “One of the major drivers [for the pivot] was that it was actually very difficult to get VC funding in an industry where the market trends were all skewing towards the cloud,” Nagar told me.

At the time of its launch, Pixeom also based its technology on OpenStack, the massive open-source project that helps enterprises manage their own data centers, which isn’t exactly known as a service that can easily be run on a single machine, let alone a low-powered one. Today, Pixeom uses containers to ship and manage its software on the edge.

What sets Pixeom apart from other edge computing platforms is that it can run on commodity hardware. There’s no need to buy a specific hardware configuration to run the software, unlike Microsoft’s Azure Stack or similar services. That makes it significantly more affordable to get started and allows potential customers to reuse some of their existing hardware investments.

Pixeom brands this capability as “software-defined edge computing” and there is clearly a market for this kind of service. While the company hasn’t made a lot of waves in the press, more than a dozen Fortune 500 companies now use its services. With that, the company now has revenues in the double-digit millions and its software manages more than a million devices worldwide.

As is so often the case in the enterprise software world, these clients don’t want to be named, but Nagar tells me they include one of the world’s largest fast food chains, for example, which uses the Pixeom platform in its stores.

On the software side, Pixeom is relatively cloud agnostic. One nifty feature of the platform is that it is API-compatible with Google Cloud Platform, AWS and Azure and offers an extensive subset of those platforms’ core storage and compute services, including a set of machine learning tools. Pixeom’s implementation may be different, but for an app, the edge endpoint on a Pixeom machine reacts the same way as its equivalent endpoint on AWS, for example.
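A minimal sketch of what such API compatibility means in practice. The `ObjectStore` surface below is invented for illustration, not Pixeom’s real interface; the point is that the application issues the same call whether the backend is a cloud bucket or a box in the store’s back room.

```python
# Sketch of API-compatible edge endpoints: one call surface, two backends.
class ObjectStore:
    """Same put/get interface regardless of where the bytes actually land."""

    def __init__(self, backend):
        # The backend could be an in-memory dict on an edge box, or a
        # client for a cloud object store; the app can't tell the difference.
        self.backend = backend

    def put_object(self, bucket, key, body):
        self.backend.setdefault(bucket, {})[key] = body

    def get_object(self, bucket, key):
        return self.backend[bucket][key]

# Running "locally" on commodity edge hardware.
edge = ObjectStore(backend={})
edge.put_object("ml-models", "v1.bin", b"\x00\x01")
print(edge.get_object("ml-models", "v1.bin"))
```

The design choice this illustrates is that compatibility lives at the API boundary, so existing cloud-targeted application code can be repointed at the edge without rewrites.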

Until now, Pixeom mostly financed its expansion, and the salaries of its more than 90 employees, from its revenue. It only took a small funding round (together with a Kickstarter campaign) when it first launched the original device. Technically, the new investment is an extension of that early round, so depending on how you want to look at it, this is either a very large seed round or a Series A round.
