Jul 02, 2020

QuestDB nabs $2.3M seed to build open source time series database

QuestDB, a member of the Y Combinator summer 2020 cohort, is building an open source time series database with speed top of mind. Today the startup announced a $2.3 million seed round.

Episode1 Ventures led the round, with participation from Seedcamp, 7percent Ventures, Y Combinator, Kima Ventures and several unnamed angel investors.

The database was originally conceived in 2013, when current CTO Vlad Ilyushchenko was building trading systems for a financial services company and grew frustrated by the performance limitations of the databases available at the time. In response, he began building a database that could handle large amounts of data and process it extremely fast.

For a number of years, QuestDB was a side project, a labor of love for Ilyushchenko, until he met his co-founders: Nicolas Hourcard, who became CEO, and Tancrede Collard, who became CPO. The three decided to build a startup on top of the open source project last year.

“We’re building an open source database for time series data, and time series databases are a multi-billion-dollar market because they’re central for financial services, IoT and other enterprise applications. And we basically make it easy to handle explosive amounts of data, and to reduce infrastructure costs massively,” Hourcard told TechCrunch.

He adds that it’s also about high performance. “We recently released a demo that you can access from our website that enables you to query a super large dataset — 1.6 billion rows with sub-second queries, mostly, and that just illustrates how performant the software is,” he said.
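The demo runs on an ordinary QuestDB instance, so its HTTP query endpoint can be exercised directly. Here is a minimal sketch in Python, assuming the public demo at demo.questdb.io and its 1.6-billion-row `trips` table are still online:

```python
import requests

# QuestDB answers SQL over a simple HTTP endpoint (/exec). Counting all
# rows in the demo's taxi-trip table is a single GET request.
response = requests.get(
    "https://demo.questdb.io/exec",
    params={"query": "SELECT count() FROM trips"},
)
response.raise_for_status()

# The JSON payload includes the result rows under "dataset".
print(response.json()["dataset"])
```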

He sees open source as a way to build adoption from the bottom up inside organizations, winning the hearts and minds of developers first, then moving deeper into the company when the startup eventually builds a managed cloud version of the product. For now, being open source also lets the small team lean on a community of contributors to help build the database and add to its feature set.

“We’ve got this open source product that is free to use, and it’s pretty important for us to have such a distribution model because we can basically empower developers to solve their problems, and we can ask for contributions from various communities. […] And this is really a way to spur adoption,” Hourcard said.

He says that working with YC has allowed the founders to talk to other companies in the ecosystem that have built similar open source-based startups, which has been helpful. It has also helped them learn to set and meet goals, and given them access to some of the biggest names in Silicon Valley, including Marc Andreessen, who delivered a talk to the cohort the same day we spoke.

Today the company has seven employees, including the three founders, spread across the US, EU and South America. Hourcard sees this geographic spread helping when it comes to building a diverse team in the future. “We definitely want to have more diverse backgrounds to make sure that we keep having a diverse team, and we’re very strongly committed to that.”

In the short term, the company wants to continue building its community and improving the open source product, while also working on the managed cloud version.

Jul 01, 2020

Vendia raises $5.1M for its multicloud serverless platform

When the inventor of AWS Lambda, Tim Wagner, and the former head of blockchain at AWS, Shruthi Rao, co-found a startup, it’s probably worth paying attention. Vendia, as the new venture is called, combines the best of serverless and blockchain to help build a truly multicloud serverless platform for better data and code sharing.

Today, the Vendia team announced that it has raised a $5.1 million seed funding round, led by Neotribe’s Swaroop ‘Kittu’ Kolluri. Correlation Ventures, WestWave Capital, HWVP, Firebolt Ventures, Floodgate and Future\Perfect Ventures also participated in this oversubscribed round.

Image Credits: Vendia

Seeing Wagner at the helm of a blockchain-centric startup isn’t exactly a surprise. After building Lambda at AWS, he spent some time as VP of engineering at Coinbase, which he left about a year ago to build Vendia.

“One day, Coinbase approached me and said, ‘Hey, maybe we could do for the financial system what you’ve been doing over there for the cloud system,’” he told me. “And so I got interested in that. We had some conversations. I ended up going to Coinbase and spent a little over a year there as the VP of Engineering, helping them to set the stage for some of that platform work and tripling the size of the team.” He noted that Coinbase may be one of the few companies where distributed ledgers are actually mission-critical to their business, yet even Coinbase had a hard time scaling its Ethereum fleet, for example, and there was no cloud-based service available to help it do so.

Tim Wagner, Vendia co-founder and CEO. Image Credits: Vendia

“The thing that came to me as I was working there was why don’t we bring these two things together? Nobody’s thinking about how would you build a distributed ledger or blockchain as if it were a cloud service, with all the things that we’ve learned over the course of the last 10 years building out the public cloud and learning how to do it at scale,” he said.

Wagner then joined forces with Rao, who spent a lot of time in her role at AWS talking to blockchain customers. One thing she noticed was that while it makes a lot of sense to use blockchain to establish trust in a public setting, that’s really not an issue for enterprise.

“After the 500th customer, it started to make sense,” she said. “These customers had made quite a bit of investment in IoT and edge devices. They were gathering massive amounts of data. They also made investments on the other side, with AI and ML and analytics. And they said, ‘Well, there’s a lot of data and I want to push all of this data through these intelligent systems. I need a mechanism to get this data.’” But the majority of that data often comes from third-party services. At the same time, most blockchain proofs of concept weren’t moving into real production usage because the process was often far too complex, especially for enterprises that wanted to connect their systems to those of their partners.

Shruthi Rao, Vendia co-founder and CBO. Image Credits: Vendia

“We are asking these partners to spin up Kubernetes clusters and install blockchain nodes. Why is that? That’s because for blockchain to bring trust into a system to ensure trust, you have to own your own data. And to own your own data, you need your own node. So we’re solving fundamentally the wrong problem,” she explained.

The first product Vendia is bringing to market is Vendia Share, a way for businesses to share data with partners (and across clouds) in real time, all without giving up control over that data. As Wagner noted, businesses often want to share large data sets, but they also want to ensure they can control who has access to them. For those users, Vendia is essentially a virtual data lake with provenance tracking and tamper-proofing built in.

The company, which mostly raised this round after the coronavirus pandemic took hold in the U.S., is already working with a couple of design partners in multiple industries to test out its ideas, and plans to use the new funding to expand its engineering team to build out its tools.

“At Neotribe Ventures, we invest in breakthrough technologies that stretch the imagination and partner with companies that have category creation potential built upon a deep-tech platform,” said Neotribe founder and managing director Kolluri. “When we heard the Vendia story, it was a no-brainer for us. The size of the market for multiparty, multicloud data and code aggregation is enormous and only grows larger as companies capture every last bit of data. Vendia’s serverless-based technology offers benefits such as ease of experimentation, no operational heavy lifting and a pay-as-you-go pricing model, making it both very consumable and highly disruptive. Given both Tim and Shruthi’s backgrounds, we know we’ve found an ideal ‘Founder fit’ to solve this problem! We are very excited to be the lead investors and be a part of their journey.”

Jul 01, 2020

Fauna raises an additional $27M to turn databases into a simple API call

Databases have always been a complex part of the equation for developers, requiring a delicate balance to manage inside the application, but Fauna wants to make adding a database as simple as an API call, and today it announced $27 million in new funding.

The round, which is technically an extension of the company’s 2017 Series A, was led by Madrona Venture Group with participation from Addition, GV, CRV, Quest Ventures and a number of individual investors. Today’s investment brings the total raised to $57 million, according to the company.

While it was at it, the company also added some executive firepower, announcing that it was bringing on former Okta chief product officer Eric Berg as CEO and former Snowflake CEO Bob Muglia as chairman.

Companies like Stripe for payments and Twilio for communications are the poster children for the move to APIs. Instead of building sophisticated functionality from scratch, a developer can make an API call to a service and, presto, have that capability built in without any fuss. Fauna does the same thing for databases.

“Within a few lines of code with Fauna, developers can add a full-featured globally distributed database to their applications. They can simplify code, reduce costs and ship faster because they never again worry about database issues such as correctness, capacity, scalability, replication, etc,” new CEO Berg told TechCrunch.

To automate the process even further, the database is serverless, meaning that it scales up or down automatically to meet the needs of the application. Company co-founder Evan Weaver, who moved into the CTO role with the hiring of Berg, says that Stripe is a good example of how this works. “You don’t think about provisioning Stripe because you don’t have to. […] You sign up for an account and beyond that you don’t have to provision or operate anything,” Weaver explained.
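As an illustration of those “few lines,” here is a minimal sketch using Fauna’s Python driver; the secret and collection name are placeholders, and nothing is provisioned beyond creating the collection itself:

```python
from faunadb import query as q
from faunadb.client import FaunaClient

# Connect with a database secret; there are no servers to provision.
client = FaunaClient(secret="YOUR_FAUNA_SECRET")  # placeholder secret

# One-time setup: create a collection to hold documents.
client.query(q.create_collection({"name": "orders"}))

# Write a document, then read it back.
doc = client.query(
    q.create(q.collection("orders"), {"data": {"sku": "abc-123", "qty": 2}})
)
print(client.query(q.get(doc["ref"])))
```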

Like most API companies, Fauna is working at the developer level to build community and consensus around the product. Today, 25,000 developers are using the tool. While there is no open-source version, the company tries to attract developer interest with a generous free tier, after which you can pay as you go or set up fixed monthly pricing as you scale up.

The company has always been 100% remote, so when COVID hit, it didn’t really change anything about the way the company’s 40 employees work. As the company grows, Berg says it has aggressive goals around diversity and inclusion.

“Our recruiting and HR team have some pretty aggressive targets in terms of thinking about diversity in our pipelines and in our recruiting efforts, and because we’re a small team today we have the ability to impact that as we grow. If we doubled the size of the company, we could shift those percentages pretty dramatically, so it’s something that is definitely top of mind for us.”

Weaver says that fundraising began at the beginning of this year, before COVID hit, but the term sheet wasn’t signed until March. He admits being nervous throughout the process, especially as the pandemic took hold. A company like Fauna is highly technical and takes time to grow, and he worried that getting investors to understand that would be challenging even without a bleak economic picture.

“It’s a deep tech business and it takes real capital to grow and scale. It’s a high-risk, high-reward bet, which is easier to fund in boom times, but broadly I think the best companies get built during recessions when there’s less competition for talent and there’s more focus on capital.”

Jun 30, 2020

Kong donates its Kuma control plane to the Cloud Native Computing Foundation

API management platform Kong today announced that it is donating its open-source Kuma control plane technology to the Cloud Native Computing Foundation (CNCF). Since Kong built Kuma on top of the Envoy service mesh — and Envoy is part of the CNCF’s stable of open-source projects — donating it to this specific foundation was likely an obvious move.

The company first open-sourced Kuma in September 2019. In addition to donating it to the CNCF, the company also today launched version 0.6 of the codebase, which introduces a new hybrid mode that enables Kuma-based service meshes to support applications that run on complex heterogeneous environments, including VMs, Kubernetes clusters and multiple data centers.

Image Credits: Kong

Kong co-founder and CTO Marco Palladino says that the goal was always to donate Kuma to the CNCF.

“The industry needs and deserves to have a cloud native, Envoy-based control plane that is open and not governed by a single commercial entity,” he writes in today’s announcement. “From a technology standpoint, it makes no sense for individual companies to create their own control plane but rather build their own unique applications on proven technologies like Envoy and Kuma. We welcome the broader community to join Kuma on Slack and on our bi-weekly community calls to contribute to the project and continue the incredible momentum we have achieved so far.”

Kuma will become a CNCF Sandbox project. The sandbox is the first stage projects go through on the way to becoming fully graduated CNCF projects. Currently, the foundation is home to 31 sandbox projects, and Kong argues that Kuma is now production-ready and at the right stage to benefit from the overall CNCF ecosystem.

“It’s truly remarkable to see the ecosystem around Envoy continue to develop, and as a vendor-neutral organization, CNCF is the ideal home for Kuma,” said Matt Klein, the creator of the Envoy proxy. “Now developers have access to the service mesh data plane they love with Envoy as well as a CNCF-hosted Envoy-based control plane with Kuma, offering a powerful combination to make it easier to create and manage cloud native applications.”

Jun 29, 2020

CodeGuru, AWS’s AI code reviewer and performance profiler, is now generally available

AWS today announced that CodeGuru, a set of tools that use machine learning to automatically review code for bugs and suggest potential optimizations, is now generally available. The tool launched into preview at AWS re:Invent last December.

CodeGuru consists of two tools, Reviewer and Profiler, and those names pretty much describe exactly what they do. To build Reviewer, the AWS team actually trained its algorithm with the help of code from more than 10,000 open source projects on GitHub, as well as reviews from Amazon’s own internal codebase.

“Even for a large organization like Amazon, it’s challenging to have enough experienced developers with enough free time to do code reviews, given the amount of code that gets written every day,” the company notes in today’s announcement. “And even the most experienced reviewers miss problems before they impact customer-facing applications, resulting in bugs and performance issues.”

To use CodeGuru, developers continue to commit their code to their repository of choice, whether that’s GitHub, Bitbucket Cloud, AWS’s own CodeCommit or another service. CodeGuru Reviewer then analyzes that code, tries to find bugs and, if it finds any, offers potential fixes. All of this happens in the context of the code repository, so CodeGuru will, for example, add a comment to a GitHub pull request with more information about the bug and potential fixes.
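Wiring a repository into Reviewer is itself just an API call. Here is a minimal sketch using boto3, assuming a CodeCommit repository; the repository name is a placeholder:

```python
import boto3

# Associate a repository with CodeGuru Reviewer so that new pull
# requests are analyzed automatically ("my-service" is hypothetical).
reviewer = boto3.client("codeguru-reviewer")

response = reviewer.associate_repository(
    Repository={"CodeCommit": {"Name": "my-service"}}
)

# The association starts out in an intermediate state before Reviewer
# begins commenting on pull requests.
print(response["RepositoryAssociation"]["State"])
```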

To train the machine learning model, users can also provide CodeGuru with some basic feedback, though we’re mostly talking “thumbs up” and “thumbs down” here.

CodeGuru Profiler has a somewhat different mission. It is meant to help developers figure out where there might be inefficiencies in their code and to identify its most expensive lines. This includes support for serverless platforms like AWS Lambda and Fargate.

One feature the team added since it first announced CodeGuru is that Profiler now attaches an estimated dollar amount to the lines of unoptimized code.

“Our customers develop and run a lot of applications that include millions and millions of lines of code. Ensuring the quality and efficiency of that code is incredibly important, as bugs and inefficiencies in even a few lines of code can be very costly. Today, the methods for identifying code quality issues are time-consuming, manual, and error-prone, especially at scale,” said Swami Sivasubramanian, vice president, Amazon Machine Learning, in today’s announcement. “CodeGuru combines Amazon’s decades of experience developing and deploying applications at scale with considerable machine learning expertise to give customers a service that improves software quality, delights their customers with better application performance, and eliminates their most expensive lines of code.”

AWS says a number of companies started using CodeGuru during the preview period. These include the likes of Atlassian, EagleDream and DevFactory.

“While code reviews from our development team do a great job of preventing bugs from reaching production, it’s not always possible to predict how systems will behave under stress or manage complex data shapes, especially as we have multiple deployments per day,” said Zak Islam, head of Engineering, Tech Teams, at Atlassian. “When we detect anomalies in production, we have been able to reduce the investigation time from days to hours and sometimes minutes thanks to Amazon CodeGuru’s continuous profiling feature. Our developers now focus more of their energy on delivering differentiated capabilities and less time investigating problems in our production environment.”

Image Credits: AWS

Jun 26, 2020

CIO Cynthia Stoddard explains Adobe’s journey from boxes to the cloud

Up until 2013, Adobe sold its software in cardboard boxes that were distributed mostly by third-party vendors.

In time, the company realized there were a number of problems with that approach. For starters, updates took months or years to ship, and Adobe software was so costly that much of its user base didn’t upgrade. But perhaps even more important than the revenue/development gap was the fact that Adobe had no direct connection to the people who purchased its products.

By abdicating sales to others, Adobe had made third-party resellers, not end users, its customers, and changing the distribution system also meant transforming the way the company developed and sold its most lucrative products.

The shift was a bold move that has paid off handsomely, as the company surpassed an $11 billion annual run rate in December, but it was still an enormous risk at the time. We spoke to Adobe CIO Cynthia Stoddard to learn more about what it took to completely transform the way the company did business.

Understanding the customer

Before Adobe could make the switch to selling software as a cloud service subscription, it needed a mechanism for doing so, and that involved completely repurposing its website, Adobe.com, which at the time was a purely informational site.

“So when you think about transformation the first transformation was how do we connect and sell and how do we transition from this large network of third parties into selling direct to consumer with a commerce site that needed to be up 24×7,” Stoddard explained.

The work didn’t stop there, though, because Adobe wasn’t simply abandoning the entire distribution network that was in place. In the new cloud model, the company still has a healthy network of partners, and it had to set up the new system to accommodate them alongside individual and business customers.

She says one of the keys to managing a set of changes this immense was that they didn’t try to do everything at once. “One of the things we didn’t do was say, ‘We’re going to move to the cloud, let’s throw everything away.’ What we actually did is say we’re going to move to the cloud, so let’s iterate and figure out what’s working and not working. Then we could change how we interact with customers, and then we could change the reporting, back office systems and everything else in a very agile manner,” she said.

Jun 24, 2020

AWS launches Amazon Honeycode, a no-code mobile and web app builder

AWS today announced the beta launch of Amazon Honeycode, a new, fully managed low-code/no-code development tool that aims to make it easy for anybody in a company to build their own applications. All of this, of course, is backed by a database in AWS and a web-based, drag-and-drop interface builder.

Developers can build applications for up to 20 users for free. After that, they pay per user and for the storage their applications take up.

Image Credits: Amazon/AWS

“Customers have told us that the need for custom applications far outstrips the capacity of developers to create them,” said AWS VP Larry Augustin in the announcement. “Now with Amazon Honeycode, almost anyone can create powerful custom mobile and web applications without the need to write code.”

Like similar tools, Honeycode provides users with a set of templates for common use cases like to-do list applications, customer trackers, surveys, schedules and inventory management. Traditionally, AWS argues, a lot of businesses have relied on shared spreadsheets to do these things.

“Customers try to solve for the static nature of spreadsheets by emailing them back and forth, but all of the emailing just compounds the inefficiency because email is slow, doesn’t scale, and introduces versioning and data syncing errors,” the company notes in today’s announcement. “As a result, people often prefer having custom applications built, but the demand for custom programming often outstrips developer capacity, creating a situation where teams either need to wait for developers to free up or have to hire expensive consultants to build applications.”

It’s no surprise, then, that Honeycode uses a spreadsheet view as its core data interface, given how familiar virtually every potential user is with the concept. To manipulate data, users can work with standard spreadsheet-style formulas, which seems to be about the closest the service gets to actual programming. “Builders,” as AWS calls Honeycode users, can also set up notifications, reminders and approval workflows within the service.

AWS says these databases can easily scale up to 100,000 rows per workbook. With this, AWS argues, users can then focus on building their applications without having to worry about the underlying infrastructure.
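Honeycode isn’t entirely sealed off from code, either: boto3 ships a honeycode client whose GetScreenData call reads rows out of an app screen. A minimal sketch, with every ID a placeholder for values from your own workbook:

```python
import boto3

# Read the rows currently backing a Honeycode app screen. The service
# only runs in us-west-2 for now; all IDs below are hypothetical.
honeycode = boto3.client("honeycode", region_name="us-west-2")

response = honeycode.get_screen_data(
    workbookId="your-workbook-id",
    appId="your-app-id",
    screenId="your-screen-id",
    maxResults=10,
)

print(response["results"])
```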

As of now, it doesn’t look like users will be able to bring in any outside data sources, though that may still be on the company’s roadmap. On the other hand, these kinds of integrations would also complicate the process of building an app and it looks like AWS is trying to keep things simple for now.

Honeycode currently only runs in the AWS US West region in Oregon but is coming to other regions soon.

Among Honeycode’s first customers are SmugMug and Slack.

“We’re excited about the opportunity that Amazon Honeycode creates for teams to build apps to drive and adapt to today’s ever-changing business landscape,” said Brad Armstrong, VP of Business and Corporate Development at Slack in today’s release. “We see Amazon Honeycode as a great complement and extension to Slack and are excited about the opportunity to work together to create ways for our joint customers to work more efficiently and to do more with their data than ever before.”

Jun 24, 2020

Cloud Foundry gets an updated CLI to make life easier for enterprise developers

The Cloud Foundry Foundation, the nonprofit behind the popular open-source enterprise platform-as-a-service project, is holding its developer conference today. What’s usually a bi-annual community gathering (traditionally one in Europe and one in North America) is now a virtual event, but there’s still plenty of news from the Summit, both from the organization itself and from the wider ecosystem.

After going through a number of challenging technical changes in order to adapt to the new world of containers and DevOps, the organization’s focus these days is squarely on improving the developer experience around Cloud Foundry (CF). The promise of CF, after all, has always been that it would make life easier for enterprise developers (assuming they follow the overall CF processes).

“There are really two areas of focus that our community has: number one, re-platform on Kubernetes. No major announcements about that. […] And then the secondary focus is continuing to evolve our developer experience,” Chip Childers, the executive director of the Cloud Foundry Foundation, told me ahead of today’s announcements.

At the core of the CF experience is its “cf” command-line interface (CLI). With today’s update, it is getting a number of new capabilities, mostly with an eye toward giving developers more flexibility to support their own workflows.

“The cf CLI v7 was made possible through the tremendous work of a diverse, distributed group of collaborators and committers,” said Josh Collins, Cloud Foundry’s CLI project lead and senior product manager at VMware. “Modern development techniques are much simpler with Cloud Foundry as a result of the new CLI, which abstracts away the nuances of the CF API into a command-line interface that’s easy and elegant to use.”

Built on top of CF’s v3 APIs, which have been in the making for a while, the new CLI enables features like rolling app deployments, which allow developers to push updates without downtime. “Let’s say you have a number of instances of the application out there and you want to slowly roll instance by instance to perform the upgrade and allow traffic to be spread across both new and old versions,” explained Childers. “Being able to do that with just a simple command is a very powerful thing.”
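Scripted from a deploy pipeline, that zero-downtime path is one flag on the usual push. A minimal sketch in Python, with a hypothetical app name:

```python
import subprocess

def cf(*args: str) -> None:
    """Invoke the cf v7 CLI, raising if the command fails."""
    subprocess.run(["cf", *args], check=True)

# Roll the app instance by instance instead of stopping it first; old
# and new instances serve traffic side by side during the rollout.
cf("push", "my-app", "--strategy", "rolling")
```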

Developers can also now run sub-steps of their “cf push” processes. With this, they get more granular control over their deployments (“cf push” is the command for deploying a CF application), and they gain the ability to push apps that run multiple processes, say a UI process and a worker process.

In the overall Cloud Foundry ecosystem, things continue at their regular pace, with EngineerBetter, for example, joining the Cloud Foundry Foundation as a new member, Suse updating its Cloud Application Platform, and long-time CF backers like anynines, Atos and Grape Up updating their respective CF-centric platforms, too. Stark & Wayne, which has long offered a managed CF solution, is launching new support options with the addition of college-style advisory sessions and an update to its Kubernetes-centric Gluon controller for CF deployments.

Jun 24, 2020

Suse launches version 2.0 of its Cloud Foundry-based Cloud Application Platform

Suse, the well-known German open-source company that went through more corporate owners than anybody can remember until it finally became independent again in 2019, has long been a champion of Cloud Foundry, the open-source platform-as-a-service project. And while you may think of Suse as a Linux distribution, today’s company also offers a number of other services, including a container platform, DevOps tools and the Suse Cloud Application Platform, based on Cloud Foundry. Today, right in time for the bi-annual (and now virtual) Cloud Foundry Summit, the company announced the launch of version 2.0 of this platform.

The promise of the Application Platform, and indeed Cloud Foundry, is that it allows for one-step application deployments and an enterprise-ready platform to host them.

The marquee feature of version 2.0 is that it now includes a new Kubernetes Operator, a standard way of packaging, deploying and managing container-based applications, which makes deploying and managing Cloud Foundry on Kubernetes infrastructure easier.

Suse President of Engineering and Innovation Thomas Di Giacomo also notes that it’s now easier to “install, operate and maintain on Kubernetes platforms anywhere — on premises and in public clouds,” and that it opens up a new path for existing Cloud Foundry users to move to a modern container-based architecture. Indeed, for the last few years, Suse has been crucial to bringing both Kubernetes support to Cloud Foundry and Cloud Foundry to Kubernetes.

Cloud Foundry, it’s worth noting, long used its home-grown container orchestration tool, which the community developed before anybody had even heard of Kubernetes. Over the course of the last few years, though, Kubernetes became the de facto standard for container management, and today, Cloud Foundry supports both its own Diego tool and Kubernetes.

“Suse Cloud Application Platform 2.0 builds on and advances those efforts, incorporating several upstream technologies recently contributed by Suse to the Cloud Foundry community,” writes Di Giacomo. “These include KubeCF, a containerized version of the Cloud Foundry Application Runtime designed to run on Kubernetes, and Project Quarks, a Kubernetes operator for automating deployment and management of Cloud Foundry on Kubernetes.”

Jun 24, 2020

Lightrun raises $4M for its continuous debugging and observability platform

Lightrun, a Tel Aviv-based startup that makes it easier for developers to debug their production code, today announced that it has raised a $4 million seed round led by Glilot Capital Partners, with participation from a number of engineering executives from several Fortune 500 firms.

The company was co-founded by Ilan Peleg (who, in a previous life, was a competitive 800m runner) and Leonid Blouvshtein, with Peleg taking the CEO role and Blouvshtein the CTO position.

The overall idea behind Lightrun is that it’s too hard for developers to debug their production code. “In today’s world, whenever a developer issues a new software version and deploys it into production, the only way to understand the application’s behavior is based on log lines or metrics which were defined during the development stage,” Peleg explained. “The thing is, that is simply not enough. We’ve all encountered cases of missing a very specific log line when trying to troubleshoot production issues, then having to release a new hotfix version in order to add this specific logline, or — alternatively — reproduce the bug locally to better understand the application’s behavior.”

Image Credits: Lightrun

With Lightrun, as the co-founders showed me in a demo, developers can easily add new logs and metrics to their code from their IDE and then receive real-time data from their real production or development environments. For that to work, they need to have the Lightrun agent installed, but the overhead here is generally low because the agent sits idle until it is needed. In the IDE, the experience isn’t all that different from setting a traditional breakpoint in a debugger — only that there is no break. Lightrun can also use existing logging tools like Datadog to pipe its logging data to them.

While the service’s agent is agnostic about the environment it runs in, the company currently supports only JVM languages. Blouvshtein noted that building JVM language support was likely harder than building support for other languages, and the company plans to launch support for more languages in the future.

“We make a point of investing in technologies that transform big industries,” said Kobi Samboursky, founder and managing partner at Glilot Capital Partners. “Lightrun is spearheading Continuous Debugging and Continuous Observability, picking up where CI/CD ends, turning observability into a real-time process instead of the iterative process it is today. We’re confident that this will become DevOps and development best practices, enabling I&O leaders to react faster to production issues.”

For now, there is still a bit of an onboarding process to get started with Lightrun, though it’s generally very short, the team tells me. Over time, the company plans to make this self-service, at which point Lightrun will likely also become more interesting to smaller teams and individual developers. Today, though, the company is mostly focused on enterprise users and, despite only really launching out of stealth and offering limited language support, it already has a number of paying customers, including major enterprises.

“Our strategy is based on two approaches: bottom-up and top-down. Bottom-up, we’re targeting developers, they are the end-users and we want to ensure they get a quality product they can trust to help them. We put a lot of effort into reaching out through the developer channels and communities, as well as enabling usage and getting feedback. […] Top-down approach, we are approaching R&D management like VP of R&D, R&D directors in bigger companies and then we show them how Lightrun saves company development resources and improves customer satisfaction.”

Unsurprisingly, the company, which currently has about a dozen employees, plans to use the new funding to add support for more languages and improve its service with new features, including support for tracing.
