Sep 19, 2018
--

GitLab raises $100M

GitLab, the developer service that aims to offer a full lifecycle DevOps platform, today announced that it has raised a $100 million Series D funding round at a valuation of $1.1 billion. The round was led by Iconiq.

As GitLab CEO Sid Sijbrandij told me, this round, which brings the company’s total funding to $145.5 million, will help it reach its goal of an IPO by November 2020.

According to Sijbrandij, GitLab’s original plan was to raise a new funding round at a valuation over $1 billion early next year. But since Iconiq came along with an offer that pretty much matched what the company set out to achieve in a few months anyway, the team decided to go ahead and raise the round now. Unsurprisingly, Microsoft’s acquisition of GitHub earlier this year helped to accelerate those plans, too.

“We weren’t planning on fundraising actually. I did block off some time in my calendar next year, starting from February 25th to do the next fundraise,” Sijbrandij said. “Our plan is to IPO in November of 2020 and we anticipated one more fundraise. I think in the current climate, where the macroeconomics are really good and GitHub got acquired, people are seeing that there’s one independent company, one startup left basically in this space. And we saw an opportunity to become best in class in a lot of categories.”

As Sijbrandij stressed, while most people still look at GitLab as a GitHub and Bitbucket competitor (and given the similarity in their names, who wouldn’t?), GitLab wants to be far more than that. It now offers products in nine categories and also sees itself as competing with the likes of VersionOne, Jira, Jenkins, Artifactory, Electric Cloud, Puppet, New Relic and BlackDuck.

“The biggest misunderstanding we’re seeing is that GitLab is an alternative to GitHub and we’ve grown beyond that,” he said. “We are now in nine categories all the way from planning to monitoring.”

Sijbrandij notes that there’s a billion-dollar player in every space in which GitLab competes. “But we want to be better,” he said. “And that’s only possible because we are open core, so people co-create these products with us. That being said, there’s still a lot of work on our side, helping to get those contributions over the finish line, making sure performance and quality stay up, establishing a consistent user interface. These are things that typically don’t come from the wider community and with this fundraise of $100 million, we will be able to make sure we can sustain that effort in all the different product categories.”

Given this focus, GitLab will invest most of the funding in its engineering efforts to build out its existing products but also to launch new ones. The company plans to launch new features like tracing and log aggregation, for example.

With this very public commitment to an IPO, GitLab is also signaling that it plans to stay independent. That’s very much Sijbrandij’s plan, at least, though he admitted that “there’s always a price” if somebody came along and wanted to acquire the company. He did note that he likes the transparency that comes with being a public company.

“We always managed to be more bullish about the company than the rest of the world,” he said. “But the rest of the world is starting to catch up. This fundraise is a statement that we now have the money to become a public company where we’re not interested in being acquired. That is what we’re setting out to do.”

Sep 12, 2018
--

Nvidia launches the Tesla T4, its fastest data center inferencing platform yet

Nvidia today announced its new GPU for machine learning and inferencing in the data center. The new Tesla T4 GPUs (where the ‘T’ stands for Nvidia’s new Turing architecture) are the successors to the current batch of P4 GPUs that virtually every major cloud computing provider now offers. Google, Nvidia said, will be among the first to bring the new T4 GPUs to its Cloud Platform.

Nvidia argues that the T4s are significantly faster than the P4s. For language inferencing, for example, the T4 is 34 times faster than using a CPU and more than 3.5 times faster than the P4. Peak performance for the T4 is 260 TOPS for 4-bit integer operations and 65 TOPS for floating point operations. The T4 sits on a standard low-profile 75-watt PCI-e card.

What’s most important, though, is that Nvidia designed these chips specifically for AI inferencing. “What makes Tesla T4 such an efficient GPU for inferencing is the new Turing tensor core,” said Ian Buck, Nvidia’s VP and GM of its Tesla data center business. “[Nvidia CEO] Jensen [Huang] already talked about the Tensor core and what it can do for gaming and rendering and for AI, but for inferencing — that’s what it’s designed for.” In total, the chip features 320 Turing Tensor cores and 2,560 CUDA cores.

In addition to the new chip, Nvidia is also launching a refresh of its TensorRT software for optimizing deep learning models. This new version also includes the TensorRT inference server, a fully containerized microservice for data center inferencing that plugs seamlessly into an existing Kubernetes infrastructure.

 

 

Sep 11, 2018
--

Twilio’s contact center products just got more analytical with Ytica acquisition

Twilio, a company best known for supplying communications APIs for developers, has a product called Twilio Flex for building sophisticated customer service applications on top of those APIs. Today, it announced it was acquiring Ytica (pronounced Why-tica) to provide an operational and analytical layer on top of the customer service solution.

The companies would not discuss the purchase price, but Twilio indicated it does not expect the acquisition to have a material impact on its “results, operations or financial condition.” In other words, it probably didn’t cost much.

Ytica, which is based in Prague, has actually been a partner with Twilio for some time, so coming together in this fashion really made a lot of sense, especially as Twilio has been developing Flex.

Twilio Flex is an app platform for contact centers, which offers a full stack of applications and allows users to deliver customer support over multiple channels, Al Cook, general manager of Twilio Flex, explained. “Flex deploys like SaaS, but because it’s built on top of APIs, you can reach in and change how Flex works,” he said. That is very appealing, especially for larger operations looking for a flexible, cloud-based solution without the baggage of on-prem legacy products.

What the product was lacking, however, was a native way to manage customer service representatives from within the application, and understand through analytics and dashboards, how well or poorly the team was doing. Having that ability to measure the effectiveness of the team becomes even more critical the larger the group becomes, and Cook indicated some Flex users are managing enormous groups with 10,000-20,000 employees.

Ytica provides a way to measure the performance of customer service staff, allowing management to monitor and intervene and coach when necessary. “It made so much sense to join together as one team. They have huge experience in the contact center, and a similar philosophy to build something customizable and programmable in the cloud,” Cook said.

While Ytica works with other vendors beyond Twilio, CEO Simon Vostrý says that they will continue to support those customers, even as they join the Twilio family. “We can run Flex and can continue to run this separately. We have customers running on other SaaS platforms, and we will continue to support them,” he said.

The company will remain in Prague and become a Twilio satellite office. All 14 employees are expected to join the Twilio team and Cook says plans are already in the works to expand the Prague team.

Sep 11, 2018
--

Anaxi brings more visibility to the development process

Anaxi’s mission is to bring more transparency to the software development process. The tool, which is now live for iOS, with web and Android versions planned for the near future, connects to GitHub to give you actionable insights into the state of your projects and to help you manage projects and issues. Support for Atlassian’s Jira is also in the works.

The new company was founded by former Apple engineering manager and Docker EVP of product development Marc Verstaen and former CodinGame CEO John Lafleur. Unsurprisingly, this new tool is all about fixing the issues these two have seen in their daily lives as developers.

“I’ve been doing software for 40 years,” Verstaen told me. “And every time is the same. You start with a small team and it’s fine. Then you grow and you don’t know what’s going on. It’s a black box.” While the rest of the business world now focuses on data and analytics, software development never quite reached that point. Verstaen argues that this was acceptable until 10 or 15 years ago because only software companies were doing software. But now that every company is becoming a software company, that’s not acceptable anymore.

Using Anaxi, you can easily see all issue reports and pull requests from your GitHub repositories, both public and private. But you also get visual status indicators that tell you when a project has too many blockers, for example, as well as the ability to define your own labels. You can also define due dates for issues.

One interesting aspect of Anaxi is that it doesn’t store all of this information on your phone or on a proprietary server. Instead, it caches only the minimum information necessary (including your handles) and then pulls the rest of the information from GitHub as needed. That cache is encrypted on the phone, but for the most part, Anaxi simply relies on the GitHub API to pull in data when needed. There’s a bit of a trade-off here in terms of speed, but Verstaen noted that this also means you always get the most recent data and that GitHub’s API is quite fast and easy to work with.
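The cache-then-fetch pattern described here can be sketched roughly as follows. This is an illustrative sketch only, not Anaxi’s actual implementation: the `IssueCache` class and the `fake_fetch` stub are hypothetical, and a real client would call GitHub’s REST API (e.g. `GET /repos/{owner}/{repo}/issues`) in place of the stub.

```python
import json
import time

# Hypothetical sketch of a fetch-on-demand client: keep only a small,
# short-lived local cache and treat the remote API as the source of truth.

CACHE_TTL_SECONDS = 60

class IssueCache:
    def __init__(self, fetcher, ttl=CACHE_TTL_SECONDS):
        self._fetcher = fetcher   # callable: repo name -> list of issue dicts
        self._cache = {}          # repo name -> (timestamp, issues)
        self._ttl = ttl

    def issues(self, repo):
        entry = self._cache.get(repo)
        now = time.time()
        if entry and now - entry[0] < self._ttl:
            return entry[1]             # fresh enough: serve from cache
        data = self._fetcher(repo)      # otherwise hit the API again
        self._cache[repo] = (now, data)
        return data

# Stub standing in for a real call to the GitHub REST API.
def fake_fetch(repo):
    return [{"number": 1, "title": "Example issue", "state": "open"}]

cache = IssueCache(fake_fetch)
print(json.dumps(cache.issues("octocat/hello-world")))
```

The short TTL reflects the trade-off the article mentions: slightly slower than a full local copy, but the data you see is never far from what GitHub itself reports.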

The service is currently available for free. The company plans to introduce pricing plans in the future, with prices based on the number of developers that use the product inside a company.

Sep 10, 2018
--

New Relic shifts with changing monitoring landscape

New Relic CEO Lew Cirne was feeling a bit nostalgic last week when he called to discuss the announcements for the company’s FutureStack conference taking place tomorrow in San Francisco. It had been 10 years since he first spoke to TechCrunch about his monitoring tool. A lot has changed in a decade including what his company is monitoring these days.

Cirne certainly recognizes that his company has come a long way since those first days. The monitoring world is going through a seismic shift as the way we develop apps changes. His company needs to change with it to remain relevant in today’s market.

In the early days, New Relic monitored Ruby on Rails applications, but gone are the days of monitoring only a fixed virtual machine. Today companies are using containers and Kubernetes, and beyond that, serverless architectures. Each of these approaches brings challenges to a monitoring company like New Relic, particularly the ephemeral nature and the sheer volume associated with these newer ways of working.

“We think those changes have actually been an opportunity for us to further differentiate and further strengthen our thesis that the New Relic way is really the most logical way to address this.” He believes that his company has always been centered on the code, as opposed to the infrastructure where it’s delivered, and that has helped it make adjustments as the delivery mechanisms have changed.

Today, the company introduced a slew of new features and capabilities designed to keep the company oriented toward the changing needs of its customer base. One of the ways they are doing that is with a new feature called Outlier Detection, which has been designed to address changes in key metrics wherever your code happens to be deployed.

Further, Incident Context lets you see exactly where the incident occurred in the code so you don’t have to go hunting and pecking to find it in the sea of data.

Outlier Detection in action. Gif: New Relic

The company also introduced developer.newrelic.com, a site that extends the base APIs to provide a central place to build on top of the New Relic platform and give customers a way to extend the platform’s functionality. Cirne said each company has its own monitoring requirements, and they want to give them the ability to build for any scenario.

In addition, the company announced the ability to query New Relic Query Language (NRQL) data via the New Relic GraphQL API, which helps deliver new kinds of customized, programmed capabilities that aren’t available out of the box. “What if I could program New Relic to take action when a certain thing happens? When an application has a problem, it could post a notice to the status page or restart the service. You could automate something that has historically been done manually,” he explained.
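The kind of automation Cirne describes could look roughly like the sketch below. Everything here is hypothetical: the error-rate condition stands in for an NRQL-style query over a window of events, and the actions are plain stubs rather than real New Relic or status-page API calls.

```python
# Hypothetical sketch: watch a metric, and when it crosses a threshold,
# fire remediation actions automatically instead of paging a human.

def error_rate(events):
    """Fraction of events in the window that were errors."""
    errors = sum(1 for e in events if e["error"])
    return errors / len(events) if events else 0.0

def run_automation(events, threshold, actions):
    """Run every registered action when the error rate exceeds threshold."""
    fired = []
    if error_rate(events) > threshold:
        for action in actions:
            fired.append(action())
    return fired

# Stub actions standing in for "post to status page" and "restart service".
post_status_notice = lambda: "posted notice to status page"
restart_service = lambda: "restarted service"

window = [{"error": True}, {"error": True}, {"error": False}, {"error": True}]
results = run_automation(window, threshold=0.5,
                         actions=[post_status_notice, restart_service])
print(results)
```

With three of four events failing, the 0.5 threshold is exceeded and both stub actions run; a healthy window triggers nothing.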

Whatever the company is doing, it appears to be working. It went public in 2014 with an IPO share price of $30.14 and a market cap of $1.4 billion. Today, the share price stands at $103.65, with a market cap of $5.86 billion (as of publishing).

Sep 6, 2018
--

PagerDuty raises $90M to wake up more engineers in the middle of the night

PagerDuty, the popular service that helps businesses monitor their tech stacks, manage incidents and alert engineers when things go sideways, today announced that it has raised a $90 million Series D round at a valuation of $1.3 billion. With this, PagerDuty, which was founded in 2009, has now raised well over $170 million.

The round was led by T. Rowe Price Associates and Wellington Management. Accel, Andreessen Horowitz and Bessemer Venture Partners also participated. Given the leads in this round, chances are that PagerDuty is gearing up for an IPO.

“This capital infusion allows us to continue our investments in innovation that leverages artificial intelligence and machine learning, enabling us to help our customers transform their companies and delight their customers,” said Jennifer Tejada, CEO at PagerDuty in today’s announcement. “From a business standpoint, we can strengthen our investment in and development of our people, our most valuable asset, as we scale our operations globally. We’re well positioned to make the lives of digital workers better by elevating work to the outcomes that matter.”

Currently, PagerDuty users include the likes of GE, Capital One, IBM, Spotify and virtually every other software company you’ve ever heard of. In total, more than 10,500 enterprises now use the service. While it’s best known for its alerting capabilities, PagerDuty has expanded well beyond that over the years, though alerting is still a core part of its service. Earlier this year, for example, the company announced its new AIOps services that aim to help businesses reduce the amount of noisy and unnecessary alerts. I’m sure there are a lot of engineers who are quite happy about that (and now sleep better).

Sep 5, 2018
--

PoLTE lets you track devices using LTE signal

Meet PoLTE, a Dallas-based startup that wants to make location-tracking more efficient. Thanks to PoLTE’s software solution, logistics and shipment companies can much more easily track packages and goods. The startup is participating in TechCrunch’s Startup Battlefield at Disrupt SF.

If you want to use a connected device to track a package, you currently need a couple of things — a way to determine the location of the package, and a way to transmit this information over the air. The most straightforward way of doing it is by using a GPS chipset combined with a cellular chipset.

Systems-on-chip have made this easier as they usually integrate multiple modules. You can get a GPS signal and wireless capabilities in the same chip. While GPS is insanely accurate, it also requires a ton of battery just to position a device on a map. That’s why devices often triangulate your position using Wi-Fi combined with a database of Wi-Fi networks and their positions.

And yet, using GPS or Wi-Fi as well as an LTE modem doesn’t work if you want to track a container over multiple weeks or months. At some point, your device will run out of battery. Or you’ll have to spend a small fortune to buy a ton of trackers with big batteries.

PoLTE has developed a software solution that lets you turn data from the cell modem into location information. It works with existing modems and only requires a software update. The company has been working with Riot Micro, for instance.

Behind the scenes, PoLTE’s magic happens on its servers. IoT devices don’t need to do any of the computing. They just need to send a tiny sample of LTE signals, and PoLTE can figure out the location on its servers. Customers can then retrieve this data using an API.

It takes only 300 bytes of data to get location information with a precision of a few meters. You don’t need a powerful CPU, Wi-Fi, GPS or Bluetooth.
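PoLTE’s actual processing of raw LTE samples is proprietary and happens server-side, but the underlying geometric idea, positioning a device from range estimates to known cell towers, can be illustrated with a minimal 2D trilateration sketch. All coordinates and ranges below are made up for the example.

```python
import math

# Illustrative only: given three towers at known positions and a range
# estimate to each, the device position follows from simple geometry.
# Linearize by subtracting the first circle equation from the other two,
# then solve the resulting 2x2 linear system with Cramer's rule.

def trilaterate(towers, dists):
    """Solve for (x, y) from three tower positions and range estimates."""
    (x1, y1), (x2, y2), (x3, y3) = towers
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1   # nonzero as long as towers aren't collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

towers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(true_pos, t) for t in towers]
print(trilaterate(towers, dists))  # recovers (3.0, 4.0)
```

A production system would work with noisy ranges from many towers and solve a least-squares version of the same problem, which is why PoLTE can do it with such a small upload from the device.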

“We offer 80 percent cost reduction on IoT devices together with longer battery life,” CEO Ed Chao told me.

On the business side, PoLTE is using a software-as-a-service model. You can get started for free if you don’t need a lot of API calls. You then start paying depending on the size of your fleet of devices and the number of location requests.

Even if the company doesn’t land on the perfect business model right away, PoLTE is a low-level technology company at heart. Its solution is interesting in its own right and could help bigger companies that are looking for an efficient location-tracking solution.


Sep 4, 2018
--

Atlassian acquires OpsGenie, launches Jira Ops for managing incidents

Atlassian today announced the first beta of a new edition of its flagship Jira project and issue tracking tool that is meant to help ops teams handle incidents faster and more efficiently.

Jira Ops integrates with tools like OpsGenie, PagerDuty, xMatters, Statuspage, Slack and others. Many teams already use these tools when their services go down, but Atlassian argues that most companies currently use a rather ad hoc approach to working with them. Jira Ops aims to be the glue that keeps everybody on the same page and provides visibility into ongoing incidents.

Update: After Atlassian announced Jira Ops, it also announced that it has acquired OpsGenie for $295 million.

This is obviously not the first time Atlassian is using Jira to branch out from its core developer audience. Jira Service Desk and Jira Core, for example, aim at a far broader audience. Ops, however, goes after a very specific vertical.

“Service Desk was the first step,” Jens Schumacher, Head of Software Teams at Atlassian, told me. “And we were looking at what are the other verticals that we can attack with Jira.” Schumacher also noted that Atlassian built a lot of tools for its internal ops teams over the years to glue together all the different pieces that are necessary to track and manage incidents. With Jira Ops, the company is essentially turning its own playbook into a product.

In a way, though, using Jira Ops adds yet another piece to the puzzle. Schumacher, however, argues that the idea here is to have a single place to manage the process. “The idea is that when an incident happens, you have a central place where you can go, where you can find out everything about the incident,” he said. “You can see who has been paged and alerted; you can alert more people if you need to right from there; you know what Slack channel the incident is being discussed in.”

Unlike some of Atlassian’s other products, the company doesn’t currently have any plans to launch a self-hosted version of Jira Ops. The argument here is pretty straightforward: if your infrastructure goes down, then Jira Ops could also go down — and then you don’t have a tool for managing that downtime.

Jira Ops is now available for free for early access beta users. The company expects to launch version 1.0 in early 2019. By then Atlassian will surely also have figured out a pricing plan, something it didn’t announce today.

Aug 30, 2018
--

InVision deepens integrations with Atlassian

InVision today announced a newly expanded integration and strategic partnership with Atlassian that will let users of Confluence, Trello and Jira see and share InVision prototypes from within those programs.

Atlassian’s product suite is built around making product teams faster and more efficient. These tools streamline and organize communication so developers and designers can focus on getting the job done. Meanwhile, InVision’s collaboration platform has caught on to the idea that design is now a team sport, letting designers, engineers, executives and other stakeholders be involved in the design process right from the get-go.

Specifically, the expanded integration allows designers to share InVision Studio designs and prototypes right within Jira, Trello and Confluence. InVision Studio was unveiled late last year, offering designers an alternative to Sketch and Adobe.

Given the way design and development teams use both product suites, it only makes sense to let these product suites communicate with one another.

As part of the partnership, Atlassian has also made a strategic financial investment in InVision, though the companies declined to share the amount.

Here’s what InVision CEO Clark Valberg had to say about it in a prepared statement:

In today’s digital world creating delightful, highly effective customer experiences has become a central business imperative for every company in the world. InVision and Atlassian represent the essential platforms for organizations looking to unleash the potential of their design and development teams. We’re looking forward to all the opportunities to deepen our relationship on both a product and strategic basis, and build toward a more cohesive digital product operating system that enables every organization to build better products, faster.

InVision has been working to position itself as the Salesforce of the design world. Alongside InVision and InVision Studio, the company has also built out an asset and app store, as well as launched a small fund to invest in design startups. In short, InVision wants the design ecosystem to revolve around it.

Considering that InVision has raised more than $200 million, and serves 4 million users, including 80 percent of the Fortune 500, it would seem that the strategy is paying off.

Aug 29, 2018
--

Google takes a step back from running the Kubernetes development infrastructure

Google today announced that it is providing the Cloud Native Computing Foundation (CNCF) with $9 million in Google Cloud credits to help further its work on the Kubernetes container orchestrator and that it is handing over operational control of the project to the community. These credits will be split over three years and are meant to cover the infrastructure costs of building, testing and distributing the Kubernetes software.

Why does this matter? Until now, Google hosted virtually all the cloud resources that supported the project, like its CI/CD testing infrastructure, container downloads and DNS services on its cloud. But Google is now taking a step back. With the Kubernetes community reaching a state of maturity, Google is transferring all of this to the community.

Between the testing infrastructure and hosting container downloads, the Kubernetes project regularly runs more than 150,000 containers on 5,000 virtual machines, so the cost of running these systems quickly adds up. The Kubernetes container registry has served almost 130 million downloads since the launch of the project.

It’s also worth noting that the CNCF now includes a wide range of members that typically compete with each other. We’re talking Alibaba Cloud, AWS, Microsoft Azure, Google Cloud, IBM Cloud, Oracle, SAP and VMware, for example. All of these profit from the work of the CNCF and the Kubernetes community. Google doesn’t say so outright, but it’s fair to assume that it wanted others to shoulder some of the burdens of running the Kubernetes infrastructure, too. Similarly, some of the members of the community surely didn’t want to be so closely tied to Google’s infrastructure, either.

“By sharing the operational responsibilities for Kubernetes with contributors to the project, we look forward to seeing the new ideas and efficiencies that all Kubernetes contributors bring to the project operations,” Google Kubernetes Engine product manager William Deniss writes in today’s announcement. He also notes that a number of Google employees will still be involved in running the Kubernetes infrastructure.

“Google’s significant financial donation to the Kubernetes community will help ensure that the project’s constant pace of innovation and broad adoption continue unabated,” said Dan Kohn, the executive director of the CNCF. “We’re thrilled to see Google Cloud transfer management of the Kubernetes testing and infrastructure projects into contributors’ hands — making the project not just open source, but openly managed, by an open community.”

It’s unclear whether the project plans to take some of the Google-hosted infrastructure and move it to another cloud, but it could definitely do so — and other cloud providers could step up and offer similar credits, too.
