Nov 02, 2020

Coupa Software snags Llamasoft for $1.5B to bring together spending and supply chain data

Coupa Software, a publicly traded company that helps large corporations manage spending, announced that it was buying Llamasoft, an 18-year-old Michigan company that helps large companies manage their supply chain. The deal was pegged at $1.5 billion.

This year Llamasoft released its latest tool, an AI-driven platform for managing supply chains intelligently. This capability in particular seemed to attract Coupa’s attention, as it was looking for a supply chain application to complement its spend management capabilities.

Coupa CEO and chairman Rob Bernshteyn says that combining that supply chain data with Coupa’s spending data can be a powerful pairing.

“Llamasoft’s deep supply chain expertise and sophisticated data science and modeling capabilities, combined with the roughly $2 trillion of cumulative transactional spend data we have in Coupa, will empower businesses with the intelligence needed to pivot on a dime,” Bernshteyn said in a statement.

The purchase comes at a time when companies are focusing more and more on digitizing processes across the enterprise, and when supply chains can shift unpredictably depending on where COVID-19 hotspots flare up at any given time.

“With demand uncertainty on one hand, and supply volatility on the other, companies are in need of supply chain technology that can help them assess alternatives and balance trade-offs to achieve desired business results. LLamasoft provides these capabilities with an AI-powered cloud platform that empowers companies to make smarter supply chain decisions, faster,” the company wrote in a statement.

Llamasoft was founded in 2002 in Ann Arbor, Michigan, and has raised more than $56 million, according to Crunchbase data. Its largest raise was a $50 million Series B in 2015 led by Goldman Sachs.

The company generated more than $100 million in revenue and has 650 big customers, including Boeing, DHL, Kimberly-Clark and GM, according to company data.

Coupa has been extremely acquisitive over the years, buying 17 companies, according to Crunchbase data. This deal represents the company’s fourth acquisition this year. So far the stock market is not enamored with the acquisition; the company’s stock price was down 5.2% at the time of publication.

Oct 21, 2020

Contrast launches its security observability platform

Contrast, a developer-centric application security company with customers that include Liberty Mutual Insurance, NTT Data, AXA and Bandwidth, today announced the launch of its security observability platform. The idea here is to offer developers a single pane of glass to manage an application’s security across its lifecycle, combined with real-time analysis and reporting, as well as remediation tools.

“Every line of code that’s happening increases the risk to a business if it’s not secure,” said Contrast CEO and chairman Alan Naumann. “We’re focused on securing all that code that businesses are writing for both automation and digital transformation.”

Over the course of the last few years, the well-funded company, which raised a $65 million Series D round last year, launched numerous security tools that cover a wide range of use cases, from automated penetration testing to cloud application security and now DevOps — and this new platform is meant to tie them all together.

DevOps, the company argues, is really what necessitates a platform like this, given that developers now push more code into production than ever — and the onus of ensuring that this code is secure now often falls on those same developers.


Traditionally, Naumann argues, security services focused on analyzing the code itself and inspecting traffic.

“We think at the application layer, the same principles of observability apply that have been used in the IT infrastructure space,” he said. “Specifically, we do instrumentation of the code and we weave security sensors into the code as it’s being developed and are looking for vulnerabilities and observing running code. […] Our view is: the world’s most complex systems are best when instrumented, whether it’s an airplane, a spacecraft, an IT infrastructure. We think the same is true for code. So our breakthrough is applying instrumentation to code and observing for security vulnerabilities.”
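
Contrast hasn’t published the mechanics of its sensors in this announcement, but the instrumentation idea can be sketched in miniature: wrap code as it loads and watch real inputs flow through it at runtime. The toy Python decorator below is purely illustrative (the pattern, names and regex are assumptions, not Contrast’s implementation):

```python
# Illustrative only: a toy "security sensor" in the spirit of runtime
# instrumentation. Contrast's real agents weave sensors into code at a
# much lower level; this sketch just wraps a function and inspects its
# arguments as the code actually runs.
import functools
import re

SQLI_PATTERN = re.compile(r"('|--|;|\bOR\b\s+1=1)", re.IGNORECASE)

def security_sensor(func):
    """Wrap a function and flag suspicious string arguments at call time."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for value in list(args) + list(kwargs.values()):
            if isinstance(value, str) and SQLI_PATTERN.search(value):
                print(f"[sensor] possible SQL injection into {func.__name__}: {value!r}")
        return func(*args, **kwargs)
    return wrapper

@security_sensor
def find_user(user_id: str) -> str:
    # Deliberately unsafe string interpolation, for demonstration.
    return f"SELECT * FROM users WHERE id = '{user_id}'"

if __name__ == "__main__":
    find_user("42")             # clean input, no alert
    find_user("42' OR 1=1 --")  # triggers the sensor
```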

With this new platform, Contrast is aggregating information from its existing systems into a single dashboard. And while Contrast observes the code throughout its lifecycle, it also scans for vulnerabilities whenever a developer checks code into the CI/CD pipeline, thanks to integrations with most of the standard tools like Jenkins. It’s worth noting that the service also scans for vulnerabilities in open-source libraries. Once deployed, Contrast’s new platform keeps an eye on the data that runs through the various APIs and systems the application connects to and scans for potential security issues there as well.
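
Conceptually, the open-source library scanning piece boils down to checking a project’s pinned dependencies against an advisory feed on every check-in. A minimal sketch, with a hypothetical advisory table standing in for a real feed such as the NVD:

```python
# Illustrative only: the essence of open-source dependency scanning in
# a CI step. The advisory entries below are invented for the example.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"): "EXAMPLE-2020-0001: remote code execution",
}

def scan_requirements(lines):
    """Parse 'name==version' pins and report any known-bad pairs."""
    findings = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        advisory = KNOWN_VULNERABLE.get((name.lower(), version))
        if advisory:
            findings.append(f"{name}=={version}: {advisory}")
    return findings

if __name__ == "__main__":
    sample = ["requests==2.24.0", "examplelib==1.2.0"]
    for finding in scan_requirements(sample):
        print("FAIL:", finding)  # a CI job would exit non-zero here
```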

The platform currently supports all of the large cloud providers, like AWS, Azure and Google Cloud, as well as popular languages and frameworks, like Java, Python, .NET and Ruby.


Oct 20, 2020

Splunk acquires Plumbr and Rigor to build out its observability platform

Data platform Splunk today announced that it has acquired two startups, Plumbr and Rigor, to build out its new Observability Suite, which is also launching today. Plumbr is an application performance monitoring service, while Rigor focuses on digital experience monitoring, using synthetic monitoring and optimization tools to help businesses optimize their end-user experiences. Both of these acquisitions complement the technology and expertise Splunk acquired when it bought SignalFx for over $1 billion last year.

Splunk did not disclose the price of these acquisitions, but Estonia-based Plumbr had raised about $1.8 million, while Atlanta-based Rigor raised a debt round earlier this year.

When Splunk acquired SignalFx, it said it did so in order to become a leader in observability and APM. As Splunk CTO Tim Tully told me, the idea here now is to accelerate this process.


“Because a lot of our users and our customers are moving to the cloud really, really quickly, the way that they monitor [their] applications changed because they’ve gone to serverless and microservices a ton,” he said. “So we entered that space with those acquisitions, we quickly folded them together with these next two acquisitions. What Plumbr and Rigor do is really fill out more of the portfolio.”

He noted that Splunk was especially interested in Plumbr’s bytecode implementation and its real-user monitoring capabilities, and Rigor’s synthetics capabilities around digital experience monitoring (DEM). “By filling in those two pieces of the portfolio, it gives us a really amazing set of solutions because DEM was the missing piece for our APM strategy,” Tully explained.


With the launch of its Observability Suite, Splunk is now pulling together a lot of these capabilities into a single product — which also features a new design that makes it stand apart from the rest of Splunk’s tools. It combines logs, metrics, traces, digital experience, user monitoring, synthetics and more.

“At Yelp, our engineers are responsible for hundreds of different microservices, all aimed at helping people find and connect with great local businesses,” said Chris Gordon, Technical Lead at Yelp, where his team has been testing the new suite. “Our Production Observability team collaborates with Engineering to improve visibility into the performance of key services and infrastructure. Splunk gives us the tools to empower engineers to monitor their own services as they rapidly ship code, while also providing the observability team centralized control and visibility over usage to ensure we’re using our monitoring resources as efficiently as possible.”

Jun 09, 2020

IBM Cloud suffers prolonged outage

The IBM Cloud is currently suffering a major outage, and with that, multiple services that are hosted on the platform are also down, including everybody’s favorite tech news aggregator, Techmeme.

It looks like the problems started around 2:30pm PT and spread from there. Best we can tell, this is a worldwide problem and involves a networking issue, but IBM’s own status page isn’t actually loading anymore and returns an internal server error, so we don’t quite know the extent of the outage or what triggered it. IBM Cloud’s Twitter account has also remained silent, though we found a status page for IBM Aspera hosted on a third-party server, which seems to confirm that this is likely a worldwide networking issue.

IBM Cloud, which published a paper about ensuring zero downtime in April, also suffered a minor outage in its Dallas data center in March.

We’ve reached out to IBM’s PR team and will update this post once we get more information.

Apr 29, 2020

Puppet names former Cloud Foundry Foundation executive director Abby Kearns as CTO

Puppet, the Portland-based infrastructure automation company, today announced that it has named former Cloud Foundry Foundation executive director Abby Kearns as its new CTO.

Current Puppet CTO Deepak Giridharagopal will remain in his role and focus on R&D and leading new projects, while Kearns will focus on expanding the company’s product portfolio and communicating with enterprise audiences.

Kearns stepped down from her role at the Cloud Foundry Foundation earlier this month after holding that position since 2016. At the time, she wasn’t quite ready to reveal her next move, though, and her taking the CTO job at Puppet comes as a bit of a surprise. Despite a lot of usage and hype in its early days, Puppet isn’t often seen as an up-and-coming company anymore, after all. But Kearns argues that a lot of this is due to perception.

“Puppet had great technology and really drove the early DevOps movement, but they kind of fell off the face of the map,” she said. “Nobody thought of them as anything other than config management, and so I was like, well, you know, problem number one: fix that perception problem if that’s no longer the reality or otherwise, everyone thinks you’re dead.”

Kearns had already been talking to Puppet CEO Yvonne Wassenaar, who took the job in January 2019; she joined the company’s product advisory board about a year ago, and the discussion about her joining Puppet full-time became serious a few months later.

“We started talking earlier this year,” said Kearns. “She said: ‘You know, wouldn’t it be great if you could come help us? I’m building out a brand new executive team. We’re really trying to reshape the company.’ And I got really excited about the team that she built. She’s got a really fantastic new leadership team, all of them are there for less than a year. They have a new CRO, new CMO. She’s really assembled a fantastic team of people that are super smart, but also really thoughtful people.”

Kearns argues that Puppet’s product has really changed, but that the company hasn’t talked about it enough, despite the fact that 80% of the Global 5,000 are customers.

Given the COVID-19 pandemic, Kearns has obviously not been able to meet the Puppet team yet, but she told me that she’s starting to dig deeper into the company’s product portfolio and put together a strategy. “There’s just such an immensely talented team here. And I realize every startup tells you that, but really, there’s actually a lot of talented people here that are really nice. And I guess maybe it’s the Portland in them, but everyone’s nice,” she said.

“Abby is keenly aware of Puppet’s mission, having served on our Product Advisory Board for the last year, and is a technologist at heart,” said Wassenaar. “She brings a great balance to this position for us – she has deep experience in the enterprise and understands how to solve problems at massive scale.”

In addition to Kearns, former Cloud Foundry Foundation VP of marketing Devin Davis also joined Puppet as the company’s VP of corporate marketing and communications.

Update: We updated this post to clarify that Deepak Giridharagopal will remain in his role.

Apr 22, 2020

Fishtown Analytics raises $12.9M Series A for its open-source analytics engineering tool

Philadelphia-based Fishtown Analytics, the company behind the popular open-source data engineering tool dbt, today announced that it has raised a $12.9 million Series A round led by Andreessen Horowitz, with the firm’s general partner Martin Casado joining the company’s board.

“I wrote this blog post in early 2016, essentially saying that analysts needed to work in a fundamentally different way,” Fishtown founder and CEO Tristan Handy told me, when I asked him about how the product came to be. “They needed to work in a way that much more closely mirrored the way the software engineers work and software engineers have been figuring this shit out for years and data analysts are still like sending each other Microsoft Excel docs over email.”

The dbt open-source project forms the basis of this. It allows anyone who can write SQL queries to transform data and then load it into their preferred analytics tools. As such, it sits between data warehouses and the tools that load data into them on one end, and specialized analytics tools on the other.

As Casado noted when I talked to him about the investment, data warehouses have now made it affordable for businesses to store all of their data before it is transformed. So what was traditionally “extract, transform, load” (ETL) has now become “extract, load, transform” (ELT). Andreessen Horowitz is already invested in Fivetran, which helps businesses move their data into their warehouses, so it makes sense for the firm to also tackle the other side of this business.
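
To make the ETL-versus-ELT distinction concrete, here is a minimal sketch with SQLite standing in for a cloud warehouse: raw records are loaded untouched, and the transformation happens afterwards, inside the database, as plain SQL, which is the layer a tool like dbt manages. The table and column names are invented for the example.

```python
# Illustrative only: the ELT pattern with SQLite as a stand-in warehouse.
import sqlite3

conn = sqlite3.connect(":memory:")

# "Extract, Load": dump raw events as-is, with no up-front transformation.
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount_cents INTEGER, status TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, 1999, "complete"), (2, 525, "refunded"), (3, 8400, "complete")],
)

# "Transform": build an analytics-ready model from the raw table, in SQL,
# inside the database itself.
conn.execute("""
    CREATE VIEW fct_revenue AS
    SELECT COUNT(*) AS orders, SUM(amount_cents) / 100.0 AS revenue_dollars
    FROM raw_orders
    WHERE status = 'complete'
""")

print(conn.execute("SELECT * FROM fct_revenue").fetchone())  # (2, 103.99)
```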

“Dbt is, as far as we can tell, the leading community for transformation and it’s a company we’ve been tracking for at least a year,” Casado said. He also argued that data analysts — unlike data scientists — are not really catered to as a group.

Before this round, Fishtown hadn’t raised a lot of money, aside from a small SAFE round from Amplify, even though it has been around for a few years now.

But Handy argued that the company needed this time to prove that it was on to something and build a community. That community now consists of more than 1,700 companies that use the dbt project in some form and over 5,000 people in the dbt Slack community. Fishtown also now has over 250 dbt Cloud customers and the company signed up a number of big enterprise clients earlier this year. With that, the company needed to raise money to expand and also better service its current list of customers.

“We live in Philadelphia. The cost of living is low here and none of us really care to make a quadro-billion dollars, but we do want to answer the question of how do we best serve the community,” Handy said. “And for the first time, in the early part of the year, we were like, holy shit, we can’t keep up with all of the stuff that people need from us.”

The company plans to expand the team from 25 to 50 employees in 2020, and with those hires the team plans to improve and expand the product, especially its IDE for data analysts, which Handy admitted could use a bit more polish.

Feb 24, 2020

Databricks makes bringing data into its ‘lakehouse’ easier

Databricks today announced the launch of its new Data Ingestion Network of partners and the launch of its Databricks Ingest service. The idea here is to make it easier for businesses to combine the best of data warehouses and data lakes into a single platform — a concept Databricks likes to call “lakehouse.”

At the core of the company’s lakehouse is Delta Lake, Databricks’ Linux Foundation-managed open-source project that brings a new storage layer to data lakes that helps users manage the lifecycle of their data and ensures data quality through schema enforcement, log records and more. Databricks users can now work with the first five partners in the Ingestion Network — Fivetran, Qlik, Infoworks, StreamSets, Syncsort — to automatically load their data into Delta Lake. To ingest data from these partners, Databricks customers don’t have to set up any triggers or schedules — instead, data automatically flows into Delta Lake.
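
Schema enforcement is easiest to see in action. The sketch below, assuming a Spark session configured with the open-source delta-spark package, shows an append with an unexpected column being rejected rather than silently widening the table:

```python
# Sketch under stated assumptions: requires delta-spark on the classpath
# (e.g. spark.jars.packages=io.delta:delta-core_2.12:<version>).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("delta-schema-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

path = "/tmp/events_delta"

# Seed a Delta table with a two-column schema.
spark.createDataFrame([(1, "click")], ["id", "event"]) \
    .write.format("delta").mode("overwrite").save(path)

# This append carries an unexpected third column, so Delta's schema
# enforcement rejects the write instead of corrupting the table.
bad = spark.createDataFrame([(2, "click", "oops")], ["id", "event", "extra"])
try:
    bad.write.format("delta").mode("append").save(path)
except Exception as exc:  # an AnalysisException on schema mismatch
    print("append rejected:", type(exc).__name__)
```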

“Until now, companies have been forced to split up their data into traditional structured data and big data, and use them separately for BI and ML use cases. This results in siloed data in data lakes and data warehouses, slow processing and partial results that are too delayed or too incomplete to be effectively utilized,” says Ali Ghodsi, co-founder and CEO of Databricks. “This is one of the many drivers behind the shift to a Lakehouse paradigm, which aspires to combine the reliability of data warehouses with the scale of data lakes to support every kind of use case. In order for this architecture to work well, it needs to be easy for every type of data to be pulled in. Databricks Ingest is an important step in making that possible.”

Databricks VP of Product Marketing Bharath Gowda also tells me that this will make it easier for businesses to perform analytics on their most recent data and hence be more responsive when new information comes in. He also noted that users will be able to better leverage their structured and unstructured data for building better machine learning models, as well as to perform more traditional analytics on all of their data instead of just a small slice that’s available in their data warehouse.

Jul 29, 2019

Microsoft acquires data privacy and governance service BlueTalon

Microsoft today announced that it has acquired BlueTalon, a data privacy and governance service that helps enterprises set policies for how their employees can access their data. The service then enforces those policies across most popular data environments and provides tools for auditing policies and access, too.
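
BlueTalon hasn’t detailed its engine here, but centralized policy enforcement reduces to a familiar shape: policies map roles to the data they may see, and a single enforcement point applies them everywhere. A toy sketch, with invented roles and columns, and in-process filtering standing in for the push-down enforcement a real engine does inside each data platform:

```python
# Illustrative only: a toy centralized data-access policy engine.
POLICIES = {
    # role -> columns that role may read
    "analyst": {"order_id", "amount", "region"},
    "support": {"order_id", "customer_email"},
}

def enforce(role: str, rows: list[dict]) -> list[dict]:
    """Return rows containing only the columns the role's policy allows."""
    allowed = POLICIES.get(role, set())
    return [{k: v for k, v in row.items() if k in allowed} for row in rows]

data = [{"order_id": 7, "amount": 19.99, "region": "EU",
         "customer_email": "a@example.com"}]
print(enforce("analyst", data))  # email is masked out
print(enforce("support", data))  # amount and region are masked out
```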

Neither Microsoft nor BlueTalon disclosed the financial details of the transaction. Ahead of today’s acquisition, BlueTalon had raised about $27.4 million, according to Crunchbase. Investors include Bloomberg Beta, Maverick Ventures, Signia Venture Partners and Stanford’s StartX fund.

Image: the BlueTalon Policy Engine and how it works

“The IP and talent acquired through BlueTalon brings a unique expertise at the apex of big data, security and governance,” writes Rohan Kumar, Microsoft’s corporate VP for Azure Data. “This acquisition will enhance our ability to empower enterprises across industries to digitally transform while ensuring right use of data with centralized data governance at scale through Azure.”

Unsurprisingly, the BlueTalon team will become part of the Azure Data Governance group, where the team will work on enhancing Microsoft’s capabilities around data privacy and governance. Microsoft already offers access and governance control tools for Azure, of course. As virtually all businesses become more data-centric, though, the need for centralized access controls that work across systems is only going to increase and new data privacy laws aren’t making this process easier.

“As we began exploring partnership opportunities with various hyperscale cloud providers to better serve our customers, Microsoft deeply impressed us,” BlueTalon CEO Eric Tilenius, who has clearly read his share of “our incredible journey” blog posts, explains in today’s announcement. “The Azure Data team was uniquely thoughtful and visionary when it came to data governance. We found them to be the perfect fit for us in both mission and culture. So when Microsoft asked us to join forces, we jumped at the opportunity.”

Feb 20, 2019

Why Daimler moved its big data platform to the cloud

Like virtually every big enterprise company, a few years ago, the German auto giant Daimler decided to invest in its own on-premises data centers. And while those aren’t going away anytime soon, the company today announced that it has successfully moved its on-premises big data platform to Microsoft’s Azure cloud. This new platform, which the company calls eXtollo, is Daimler’s first major service to run outside of its own data centers, though it’ll probably not be the last.

As Guido Vetter, the head of Daimler’s corporate center of excellence for advanced analytics and big data, told me, the company started getting interested in big data about five years ago. “We invested in technology — the classical way, on-premise — and got a couple of people on it. And we were investigating what we could do with data because data is transforming our whole business as well,” he said.

By 2016, the size of the organization had grown to the point where a more formal structure was needed to enable the company to handle its data at a global scale. At the time, the buzz phrase was “data lakes” and the company started building its own in order to build out its analytics capacities.

Image: Daimler’s electric vehicle lineup (Daimler AG)

“Sooner or later, we hit the limits as it’s not our core business to run these big environments,” Vetter said. “Flexibility and scalability are what you need for AI and advanced analytics and our whole operations are not set up for that. Our backend operations are set up for keeping a plant running and keeping everything safe and secure.” But in this new world of enterprise IT, companies need to be able to be flexible and experiment — and, if necessary, throw out failed experiments quickly.

So about a year and a half ago, Vetter’s team started the eXtollo project to bring all the company’s activities around advanced analytics, big data and artificial intelligence into the Azure Cloud, and just over two weeks ago, the team shut down its last on-premises servers after slowly turning on its solutions in Microsoft’s data centers in Europe, the U.S. and Asia. All in all, the actual transition between the on-premises data centers and the Azure cloud took about nine months. That may not seem fast, but for an enterprise project like this, that’s about as fast as it gets (and for a while, it fed all new data into both its on-premises data lake and Azure).

If you work for a startup, then all of this probably doesn’t seem like a big deal, but for a more traditional enterprise like Daimler, even just giving up control over the physical hardware where your data resides was a major culture change and something that took quite a bit of convincing. In the end, the solution came down to encryption.

“We needed the means to secure the data in the Microsoft data center with our own means that ensure that only we have access to the raw data and work with the data,” explained Vetter. In the end, the company decided to use the Azure Key Vault to manage and rotate its encryption keys. Indeed, Vetter noted that knowing that the company had full control over its own data was what allowed this project to move forward.
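
The pattern Vetter describes is essentially envelope encryption with customer-managed keys. A rough sketch using the widely used `cryptography` package, with a locally generated master key standing in for one that, in a setup like Daimler’s, would never leave Azure Key Vault:

```python
# Illustrative only: envelope encryption with customer-managed keys,
# using the third-party `cryptography` package. In a Key Vault setup the
# master (key-encryption) key would stay in the vault, which would wrap
# and unwrap data keys via its API; here it is generated locally instead.
from cryptography.fernet import Fernet

master = Fernet(Fernet.generate_key())  # stand-in for the vaulted key

# Encrypt each object with a fresh data key, then wrap the data key.
data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(b"vehicle telemetry record")
wrapped_key = master.encrypt(data_key)  # only the wrapped form is stored

# Decryption requires unwrapping the data key with the master key first.
plaintext = Fernet(master.decrypt(wrapped_key)).decrypt(ciphertext)
assert plaintext == b"vehicle telemetry record"

# Rotating keys means re-wrapping data keys under a new master key; the
# bulk ciphertext itself never has to be re-encrypted.
new_master = Fernet(Fernet.generate_key())
rewrapped_key = new_master.encrypt(master.decrypt(wrapped_key))
```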

Vetter tells me the company obviously looked at Microsoft’s competitors as well, but he noted that his team didn’t find a compelling offer from other vendors in terms of functionality and the security features that it needed.

Today, Daimler’s big data unit uses tools like HDInsight and Azure Databricks, which cover more than 90 percent of the company’s current use cases. In the future, Vetter also wants to make it easier for less experienced users to use self-service tools to launch AI and analytics services.

While cost is often a factor that counts against the cloud, because renting server capacity isn’t cheap, Vetter argues that this move will actually save the company money and that storage costs, especially, are going to be cheaper in the cloud than in its on-premises data center (and chances are that Daimler, given its size and prestige as a customer, isn’t exactly paying the same rack rate that others are paying for the Azure services).

As with so many big data AI projects, predictions are the focus of much of what Daimler is doing. That may mean looking at a car’s data and error code and helping the technician diagnose an issue or doing predictive maintenance on a commercial vehicle. Interestingly, the company isn’t currently bringing to the cloud any of its own IoT data from its plants. That’s all managed in the company’s on-premises data centers because it wants to avoid the risk of having to shut down a plant because its tools lost the connection to a data center, for example.

Jan 24, 2019

Humio raises $9M Series A for its real-time log analysis platform

Humio, a startup that provides a real-time log analysis platform for on-premises and cloud infrastructures, today announced that it has raised a $9 million Series A round led by Accel. It previously raised its seed round from WestHill and Trifork.

The company, which has offices in San Francisco, the U.K. and Denmark, tells me that it saw a 13x increase in its annual revenue in 2018. Current customers include Bloomberg, Microsoft and Netlify.

“We are experiencing a fundamental shift in how companies build, manage and run their systems,” said Humio CEO Geeta Schmidt. “This shift is driven by the urgency to adopt cloud-based and microservice-driven application architectures for faster development cycles, and dealing with sophisticated security threats. These customer requirements demand a next-generation logging solution that can provide live system observability and efficiently store the massive amounts of log data they are generating.”

To offer them this solution, Humio raised this round with an eye toward fulfilling the demand for its service, expanding its research and development teams and moving into more markets across the globe.

As Schmidt also noted, many organizations are rather frustrated by the log management and analytics solutions they currently have in place. “Common frustrations we hear are that legacy tools are too slow — on ingestion, searches and visualizations — with complex and costly licensing models,” she said. “Ops teams want to focus on operations — not building, running and maintaining their log management platform.”

To build this next-generation analysis tool, Humio built its own time series database engine to ingest the data, using open-source technologies like Scala, Elm and Kafka in the backend. As data enters the pipeline, it’s pushed through live searches and then stored for later queries. As Humio VP of Engineering Christian Hvitved tells me, though, running ad-hoc queries is the exception, and most users only do so when they encounter bugs or a DDoS attack.

The query language used for the live filters is also pretty straightforward. That was a conscious decision, Hvitved said. “If it’s too hard, then users don’t ask the question,” he said. “We’re inspired by the Unix philosophy of using pipes, so in Humio, larger searches are built by combining smaller searches with pipes. This is very familiar to developers and operations people since it is how they are used to using their terminal.”
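
That flow, events passing through standing, pipe-composed searches as they are ingested and only then landing in storage, can be sketched with Python generators. This is a toy model of the idea, not Humio’s actual query language or engine:

```python
# Illustrative only: live, pipe-composed searches over a log stream.
from typing import Callable, Iterable, Iterator

Event = dict
Stage = Callable[[Iterable[Event]], Iterator[Event]]

def grep(field: str, value: str) -> Stage:
    """A small search stage: keep events whose field matches a value."""
    def stage(events: Iterable[Event]) -> Iterator[Event]:
        return (e for e in events if e.get(field) == value)
    return stage

def pipe(events: Iterable[Event], *stages: Stage) -> Iterator[Event]:
    """Compose small searches into a larger one, Unix-pipe style."""
    for stage in stages:
        events = stage(events)
    return iter(events)

stored = []  # stand-in for the store that serves later ad-hoc queries

def ingest(stream: Iterable[Event]) -> Iterator[Event]:
    for event in stream:
        stored.append(event)  # persist, then feed the live searches
        yield event

log_stream = [
    {"service": "api", "level": "error", "msg": "timeout"},
    {"service": "web", "level": "info", "msg": "ok"},
    {"service": "api", "level": "info", "msg": "ok"},
]

# Roughly "service=api | level=error" as a composed live search.
for hit in pipe(ingest(log_stream), grep("service", "api"), grep("level", "error")):
    print("ALERT:", hit["msg"])
```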

Humio charges its customers based on how much data they want to ingest and for how long they want to store it. Pricing starts at $200 per month for 30 days of data retention and 2 GB of ingested data.
