Aug 20, 2019

The Impact of Downtime on Your Business


Our latest white paper, The Hidden Costs of Not Properly Managing Your Databases, discusses the potential losses that you can incur if you do not have a properly configured database and infrastructure setup. A Google study recently found that if an application does not respond in three seconds or less, more than half of mobile website users will leave your site. Even if your system is up but running slowly, impatient users may perceive this as an outage and go elsewhere.

The impact on your bottom line can be very real. Large social media and gaming websites stand to lose tens of thousands (in some cases hundreds of thousands) of users due to slow applications and websites. Users expect continuous and consistent service. It is essential that you optimize your applications and websites to ensure the best and fastest possible performance.

The Cost of Downtime

A Gartner study estimated the average cost of downtime at $5,600 per minute. At roughly $336,000 per hour, this is a price that few companies can comfortably afford. It is alarming how many well-established companies lack an adequate plan to deal with unexpected downtime.
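Scaling Gartner’s per-minute figure is simple arithmetic (a back-of-the-envelope sketch using only the study’s average as input):

```python
# Back-of-the-envelope projection from Gartner's average downtime cost.
COST_PER_MINUTE = 5_600  # USD per minute of downtime (Gartner average)

cost_per_hour = COST_PER_MINUTE * 60
cost_per_day = cost_per_hour * 24

print(f"Cost per hour: ${cost_per_hour:,}")  # Cost per hour: $336,000
print(f"Cost per day:  ${cost_per_day:,}")   # Cost per day:  $8,064,000
```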

Our white paper outlines some common disaster recovery strategies which every company should consider implementing. We offer advice on reducing the overall costs of running your database and minimizing the risk of downtime and performance spikes.

To accompany this white paper, we welcome you to use one of our free, bespoke, cost-estimation tools:

The Database Management Savings Calculator gives you a breakdown of how much you could save over a three-year period by implementing different database optimization methods. Potential savings can be substantial, up to 50% of your total database costs over a three-year period.

The Database Downtime Calculator estimates the financial impact that a slow response or downtime can have on your business. This calculator takes into account the revenue that your application generates and factors in how many employees are affected by an outage, as well as how long it would take to restore service.
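The calculator’s exact formula isn’t published; the sketch below only illustrates the kind of model it describes — revenue lost during the outage plus idle employee time — with hypothetical inputs and a made-up wage assumption:

```python
def outage_cost(annual_revenue, outage_minutes, employees_affected,
                avg_hourly_wage=40.0):
    """Rough outage cost: lost revenue plus idle payroll.

    A simplified model of the factors the calculator takes into account;
    the real tool's formula and weightings are not public.
    """
    minutes_per_year = 365 * 24 * 60
    revenue_loss = annual_revenue * outage_minutes / minutes_per_year
    productivity_loss = employees_affected * avg_hourly_wage * outage_minutes / 60
    return revenue_loss + productivity_loss

# Example: a $50M/year application, a 90-minute outage, 200 employees idled.
print(round(outage_cost(50_000_000, 90, 200)))  # 20562
```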

Reducing the likelihood and impact of downtime requires a solid process, a high-level of expertise, and the right resources in place. Percona has extensive experience advising companies on the best way to configure, manage, and run their databases. We can help you reduce slowdowns, improve workload, and substantially delay the need for upgrades.

Download our white paper for further insight, or contact us directly to discuss how we can help ensure your database is working efficiently, and advise you on managing and mitigating downtime.


Aug 20, 2019

Reputation.com nabs $30M more to help enterprises manage their profiles online

In these days where endorsements from influential personalities online can make or break a product, a startup that’s built a business to help companies harness all the long-tail firepower they can muster to get their name out there in a good way has raised some funding to expand deeper into feedback and other experience territory. Reputation.com, which works with big enterprises in areas like automotive and healthcare to help improve their visibility online and provide more accurate reports to the businesses about how their brands are perceived by customers and others, has raised $30 million in equity financing, money that CEO Joe Fuca said the company will use to continue to expand its tech platform to source more feedback and to future-proof it for further global expansion.

The funding — led by Ascension Ventures, with participation from new backers Akkadian Ventures, Industry Ventures and River City Ventures and returning investors Kleiner Perkins, August Capital, Bessemer Venture Partners, Heritage Group and Icon Ventures — is the second round Reputation.com has raised since its pivot away from services aimed at individuals. Fuca said the company’s valuation is tripling with this round, and while he wouldn’t go into the details, from what I understand from sources (supported by data in PitchBook), it had been around $120-130 million in its last round, making it now valued at between $360 million and $390 million.

Part of the reason the company’s valuation has tripled is its growth. The company doesn’t disclose many customer names (for possibly obvious reasons) but said that three of the top five automotive OEMs, as well as over 10,000 auto dealerships in the U.S., use it, with those numbers now also growing in Europe. Among healthcare providers, it now has 250 customers — including three of the top five — and in the world of property management, more than 100 companies are using Reputation.com. Other verticals using the service include financial services, hospitality and retail.

The company competes with other firms that provide services like SEO and other online profile management, and sees its big challenge as convincing businesses that there is more to having a strong profile than just an NPS score (providers of which are also competitors). So, in addition to the metrics usually used to compile that figure (typically based on customer feedback surveys), Reputation.com uses unstructured data as well (for example, sentiment analysis from social media) and applies algorithms to this to calculate a Reputation Score.
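Reputation.com doesn’t disclose how the Reputation Score is actually computed; the following is only a toy illustration of the general idea of blending structured metrics with unstructured sentiment — every range, weight and input name here is a made-up assumption:

```python
def reputation_score(survey_nps, review_avg, sentiment, review_volume):
    """Toy composite score on a 0-1000 scale.

    Reputation.com's real algorithm and weights are proprietary; this just
    sketches how structured survey data (NPS, star ratings) might be blended
    with unstructured signals (e.g. social-media sentiment).
    """
    # Normalize each signal to 0..1 (hypothetical ranges).
    nps_norm = (survey_nps + 100) / 200        # NPS runs -100..100
    stars_norm = review_avg / 5.0              # star ratings run 0..5
    sent_norm = (sentiment + 1) / 2            # sentiment runs -1..1
    vol_norm = min(review_volume / 500, 1.0)   # review volume, capped at 500
    weights = (0.35, 0.30, 0.20, 0.15)         # hypothetical weighting
    signals = (nps_norm, stars_norm, sent_norm, vol_norm)
    return round(1000 * sum(w * s for w, s in zip(weights, signals)))

print(reputation_score(survey_nps=40, review_avg=4.2,
                       sentiment=0.3, review_volume=350))  # 732
```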

Reputation.com has actually been around since 2006, with its original concept being managing individuals’ online reputations — not exactly in the Klout or PR-management sense, but with a (now very prescient-sounding) intention of providing a way for people to better control their personal information online. Originally named ReputationDefender and founded by Michael Fertik, it was a pioneer in what came to be called personal information management.

The company proposed an idea of a “vault” for your information, which could still be used and appropriated by so-called data brokers (which help feed the wider ad-tech and marketing tech machines that underpin a large part of the internet economy), but would be done with user consent and compensation.

The idea was hard to scale, however. “I think it was an addressable market issue,” said Fuca, who took over as CEO last year as the company was reorienting itself to enterprise services (it sold off the consumer/individual business at the same time to a PE firm), with Fertik taking the role of executive chairman, among other projects. “Individuals seeking reputation defending is only certain market size.”

Not so in the world of enterprise, the area the startup (and I think you can call Reputation.com a startup, given its pivot and restructure and venture backing) has been focusing on exclusively for the better part of a year.

The company today integrates closely with Google — which is not only a major platform for disseminating information in the form of SEO management, but a data source as a repository of user reviews — but despite the fact that Google holds so many cards in the stack, Fuca (who had previously been an exec at DocuSign before coming to Reputation.com) said he doesn’t see it as a potential threat or competitor.

A recent survey from the company about reputation management for the automotive sector underscores just how big a role Google plays.

“We don’t worry about Google as a competitor,” Fuca said. “It is super attracted to working with partners like us because we drive domain activity, and they love it when people like us explain to customers how to optimise on Google. For Google, it’s almost like we are an optimization partner and so it helps their entire ecosystem, and so I don’t see them being a competitor or wanting to be.”

Nevertheless, the fact that the bulk of Reputation.com’s data sources are essentially secondary — that is, publicly available information that is already online and collected by others — will be driving some of the company’s next stage of development. The plan is to start adding more of its own primary-source data gathering in the form of customer surveys and feedback forms. That will also open the door to more questions of how the company will handle privacy and personal data longer term.

“Ascension Ventures is excited to deepen its partnership with Reputation.com as it enters its next critical stage of growth,” said John Kuelper, Managing Director at Ascension Ventures, in a statement. “We’ve watched Reputation.com’s industry-leading reputation management offering grow into an even more expansive CX platform. We’re seeing some of the world’s largest brands and service providers achieve terrific results by partnering with Reputation.com to analyze and take action on customer feedback — wherever it originates — at scale and in real time. We’re excited to make this additional investment in Reputation.com as it continues to grow and expand its market leadership.”

Aug 19, 2019

The five technical challenges Cerebras overcame in building the first trillion-transistor chip

Superlatives abound at Cerebras, the until-today stealthy next-generation silicon chip company looking to make training a deep learning model as quick as buying toothpaste from Amazon. Launching after almost three years of quiet development, Cerebras introduced its new chip today — and it is a doozy. The “Wafer Scale Engine” is 1.2 trillion transistors (the most ever), 46,225 square millimeters (the largest ever), and includes 18 gigabytes of on-chip memory (the most of any chip on the market today) and 400,000 processing cores (guess the superlative).


Cerebras’ Wafer Scale Engine is larger than a typical Mac keyboard (via Cerebras Systems).

It’s made a big splash here at Stanford University at the Hot Chips conference, one of the silicon industry’s big confabs for product introductions and roadmaps, with various levels of oohs and aahs among attendees. You can read more about the chip from Tiernan Ray at Fortune and read the white paper from Cerebras itself.

Superlatives aside, though, the technical challenges that Cerebras had to overcome to reach this milestone are, I think, the more interesting story here. I sat down with founder and CEO Andrew Feldman this afternoon to discuss what his 173 engineers have been quietly building just down the street these past few years, with $112 million in venture capital funding from Benchmark and others.

Going big means nothing but challenges

First, a quick background on how the chips that power your phones and computers get made. Fabs like TSMC take standard-sized silicon wafers and divide them into individual chips by using light to etch the transistors into the chip. Wafers are circles and chips are squares, and so there is some basic geometry involved in subdividing that circle into a clear array of individual chips.

One big challenge in this lithography process is that errors can creep into the manufacturing process, requiring extensive testing to verify quality and forcing fabs to throw away poorly performing chips. The smaller and more compact the chip, the less likely any individual chip will be inoperative, and the higher the yield for the fab. Higher yield equals higher profits.
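The area-yield trade-off can be made concrete with the classic Poisson defect model, in which the probability that a die comes out defect-free falls exponentially with its area. This is a standard textbook simplification, not Cerebras’ or TSMC’s actual model, and the defect density below is hypothetical:

```python
import math

def die_yield(defect_density_per_mm2, die_area_mm2):
    """Poisson yield model: probability that a die has zero defects."""
    return math.exp(-defect_density_per_mm2 * die_area_mm2)

# Hypothetical defect density: 0.1 defects/cm^2 = 0.001 defects/mm^2.
D = 0.001
print(die_yield(D, 100))     # small mobile die:  ~0.90
print(die_yield(D, 800))     # large GPU die:     ~0.45
print(die_yield(D, 46_225))  # whole wafer:       ~8e-21, effectively zero
```

Under this model, a defect-free whole-wafer chip essentially never happens, which is why the yield problem dominated whole-wafer attempts for decades.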

Cerebras throws out the idea of etching a bunch of individual chips onto a single wafer in favor of using the whole wafer itself as one gigantic chip. That allows all of those individual cores to connect with one another directly — vastly speeding up the critical feedback loops used in deep learning algorithms — but comes at the cost of huge manufacturing and design challenges to create and manage these chips.


Cerebras’ technical architecture and design was led by co-founder Sean Lie. Feldman and Lie worked together on a previous startup called SeaMicro, which sold to AMD in 2012 for $334 million (via Cerebras Systems).

The first challenge the team ran into, according to Feldman, was handling communication across the “scribe lines.” While Cerebras’ chip encompasses a full wafer, today’s lithography equipment still has to act like there are individual chips being etched into the silicon wafer. So the company had to invent new techniques to allow each of those individual chips to communicate with each other across the whole wafer. Working with TSMC, they not only invented new channels for communication, but also had to write new software to handle chips with trillion-plus transistors.

The second challenge was yield. With a chip covering an entire silicon wafer, a single imperfection in the etching of that wafer could render the entire chip inoperative. This has been the stumbling block for whole-wafer technology for decades: given the laws of physics, it is essentially impossible to etch a trillion transistors with perfect accuracy repeatedly.

Cerebras approached the problem using redundancy by adding extra cores throughout the chip that would be used as backup in the event that an error appeared in that core’s neighborhood on the wafer. “You have to hold only 1%, 1.5% of these guys aside,” Feldman explained to me. Leaving extra cores allows the chip to essentially self-heal, routing around the lithography error and making a whole-wafer silicon chip viable.
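Cerebras hasn’t published the details of its redundancy scheme; purely as an illustration of the idea — hold back a small fraction of cores as spares and remap logical core IDs around defects — here is a hypothetical sketch (the spare ratio, flat address space and remapping rule are all simplifying assumptions; the real routing fabric is far more constrained):

```python
def build_core_map(total_cores, defective, spare_ratio=0.015):
    """Map logical core IDs onto working physical cores.

    Reserve ~1.5% of physical cores as spares; when a physical core is
    defective, the next working core transparently takes its logical slot.
    """
    n_spares = int(total_cores * spare_ratio)
    n_logical = total_cores - n_spares
    working = [c for c in range(total_cores) if c not in defective]
    if len(working) < n_logical:
        raise RuntimeError("more defects than spares; wafer not viable")
    # The first n_logical working cores serve the logical address space.
    return {logical: phys for logical, phys in zip(range(n_logical), working)}

core_map = build_core_map(400_000, defective={7, 1234, 399_999})
print(len(core_map))   # prints 394000 -- every logical core still has a home
print(core_map[7])     # prints 8 -- logical core 7 routed around dead core 7
```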

Entering uncharted territory in chip design

Those first two challenges — communicating across the scribe lines between chips and handling yield — have flummoxed chip designers studying whole-wafer chips for decades. But they were known problems, and Feldman said that they were actually easier to solve than expected by re-approaching them using modern tools.

He likens the challenge to climbing Mount Everest. “It’s like the first set of guys failed to climb Mount Everest, they said, ‘Shit, that first part is really hard.’ And then the next set came along and said ‘That shit was nothing. That last hundred yards, that’s a problem.’ ”

And indeed, the toughest challenges, according to Feldman, for Cerebras were the next three, since no other chip designer had gotten past the scribe line communication and yield challenges to actually find what happened next.

The third challenge Cerebras confronted was handling thermal expansion. Chips get extremely hot in operation, but different materials expand at different rates. That means the connectors tethering a chip to its motherboard also need to thermally expand at precisely the same rate, lest cracks develop between the two.

As Feldman explained, “How do you get a connector that can withstand [that]? Nobody had ever done that before, [and so] we had to invent a material. So we have PhDs in material science, [and] we had to invent a material that could absorb some of that difference.”

Once a chip is manufactured, it needs to be tested and packaged for shipment to original equipment manufacturers (OEMs) who add the chips into the products used by end customers (whether data centers or consumer laptops). There is a challenge though: Absolutely nothing on the market is designed to handle a whole-wafer chip.


Cerebras designed its own testing and packaging system to handle its chip (via Cerebras Systems).

“How on earth do you package it? Well, the answer is you invent a lot of shit. That is the truth. Nobody had a printed circuit board this size. Nobody had connectors. Nobody had a cold plate. Nobody had tools. Nobody had tools to align them. Nobody had tools to handle them. Nobody had any software to test,” Feldman explained. “And so we have designed this whole manufacturing flow, because nobody has ever done it.” Cerebras’ technology is much more than just the chip it sells — it also includes all of the associated machinery required to actually manufacture and package those chips.

Finally, all that processing power in one chip requires immense power and cooling. Cerebras’ chip uses 15 kilowatts of power to operate — a prodigious amount of power for an individual chip, although relatively comparable to a modern-sized AI cluster. All that power also needs to be cooled, and Cerebras had to design a new way to deliver both for such a large chip.

It essentially approached the problem by turning the chip on its side, in what Feldman called “using the Z-dimension.” The idea was that rather than trying to move power and cooling horizontally across the chip as is traditional, power and cooling are delivered vertically at all points across the chip, ensuring even and consistent access to both.

And so those were the next three challenges — thermal expansion, packaging and power/cooling — that the company has worked around the clock to solve these past few years.

From theory to reality

Cerebras has a demo chip (I saw one, and yes, it is roughly the size of my head), and it has started to deliver prototypes to customers, according to reports. The big challenge, though, as with all new chips, is scaling production to meet customer demand.

For Cerebras, the situation is a bit unusual. Because it places so much computing power on one wafer, customers don’t necessarily need to buy dozens or hundreds of chips and stitch them together to create a compute cluster. Instead, they may only need a handful of Cerebras chips for their deep-learning needs. The company’s next major phase is to reach scale and ensure a steady delivery of its chips, which it packages as a whole system “appliance” that also includes its proprietary cooling technology.

Expect to hear more details of Cerebras technology in the coming months, particularly as the fight over the future of deep learning processing workflows continues to heat up.

Aug 19, 2019

Join The New Stack for Pancake & Podcast with Q&A at TC Sessions: Enterprise

Popular enterprise news and research site The New Stack is coming to TechCrunch Sessions: Enterprise on September 5 for a special Pancake & Podcast session with live Q&A, featuring, you guessed it, delicious pancakes and awesome panelists!

Here’s the “short stack” of what’s going to happen:

  • Pancake buffet opens at 7:45 am on Thursday, September 5 at TC Sessions: Enterprise
  • At 8:15 am the panel discussion/podcast kicks off; the topic: “The People and Technology You Need to Build a Modern Enterprise”
  • After the discussion, the moderators will host a live audience Q&A session with the panelists
  • Once the Q&A is done, attendees will get the chance to win some amazing raffle prizes

You can only take part in this fun pancake-breakfast podcast if you register for a ticket to TC Sessions: Enterprise. Use the code TNS30 to get 30% off the conference registration price!

Here’s the longer version of what’s going to happen:

At 8:15 a.m., The New Stack founder and publisher Alex Williams takes the stage as the moderator and host of the panel discussion. Our topic: “The People and Technology You Need to Build a Modern Enterprise.” We’ll start with intros of our panelists and then dive into the topic with Sid Sijbrandij, founder and CEO at GitLab, and Frederic Lardinois, enterprise reporter and editor at TechCrunch, as our initial panelists. More panelists to come!

Then it’s time for questions. Questions we could see getting asked (hint, hint): Who’s on your team? What makes a great technical team for the enterprise startup? What are the observations a journalist has about how the enterprise is changing? What about when the time comes for AI? Who will I need on my team?

And just before 9 a.m., we’ll pick a ticket out of the hat and announce our raffle winner. It’s the perfect way to start the day.

On a side note, the pancake breakfast discussion will be published as a podcast on The New Stack Analysts.

But there’s only one way to get a prize and network with fellow attendees, and that’s by registering for TC Sessions: Enterprise and joining us for a short stack with The New Stack. Tickets are now $349, but you can save 30% with code TNS30.

Aug 19, 2019

Percona Distribution for PostgreSQL 11 (Beta) Is Now Available


Percona is excited to announce the beta release of Percona Distribution for PostgreSQL 11 on August 19, 2019. This release signifies a change in Percona’s software release process. In order to provide a full-featured and holistic solution, we are introducing a product distribution: a database product and tools that enable solving essential practical tasks efficiently.

Percona Distribution for PostgreSQL is a collection of tools to assist you in managing your PostgreSQL database system. It installs PostgreSQL v11.5 and complements it with a selection of extensions:

    • Additional extensions supported by PostgreSQL Global Development Group
    • pg_repack is a third-party extension to rebuild PostgreSQL database objects
    • pgaudit is a third-party extension that provides detailed session or object audit logging via the standard logging facility of PostgreSQL
    • pgBackRest is a third-party backup tool which is a better alternative to the built-in PostgreSQL backup solution (pg_basebackup)
    • Patroni is an HA (high availability) solution for PostgreSQL, widely used and supported by a strong community

Switching to Percona Distribution for PostgreSQL is safe: there is no need for dump and restore or additional downtime. Having installed Percona Distribution for PostgreSQL, you may continue to use existing PostgreSQL databases.

All tools that make up Percona Distribution for PostgreSQL are open-source and free. Find the release notes for Percona Distribution for PostgreSQL 11 in our online documentation. Report bugs in the Jira bug tracker.

This is a Beta Release

Bear in mind that this is a beta release. Percona Distribution for PostgreSQL 11 has undergone intensive testing in-house. However, errors may be encountered, and some design changes may occur before the GA release slated for later this year.

Aug 19, 2019

Percona Live Europe 2019 Schedule


If you’ve been waiting to see the schedule before you commit to a ticket, wait no longer! With just a few spots still to be confirmed before release, the bulk of the Percona Live Europe 2019 agenda can now be seen. We’ve said it before, but the quality of the submissions was high and it has been a tough task to hone them down to the final schedule. We had to make more than a few tough decisions.

Thank you again to all of you who have submitted for this event, and to the Committee for their dedicated work on selecting the community’s picks.

We have allocated the talks to tracks according to their business theme this year. The good news is that if you want to follow just one specific technology — say, PostgreSQL — all talks will be tagged by their technology in the final program, so you can follow them easily no matter which track they appear under.

Don’t forget to register soon as the discounted rate only lasts until September 1.

Time for some more of those featured talks…

You’ll find the abstracts for these talks on our agenda. We have over 80 speakers at this year’s event addressing all aspects of open source database technology.

  • Test Like a Boss: Deploy and Test Complex Topologies With a Single Command, presented by Giuseppe Maxia
  • Why PostgreSQL is Becoming a Migration Target in Large Enterprises, presented by Jobin Augustine, Percona
  • MyRocks and RocksDB Advanced Features and Performance, presented by Yoshinori Matsunobu, Facebook
  • New Indexing and Aggregation Pipeline Capabilities in MongoDB 4.2, presented by Antonios Giannopoulos, ObjectRocket by Rackspace
  • Backing Up Wikipedia Databases, presented by Jaime Crespo, Manuel Arostegui of Wikipedia
  • How a Modern Database Gets Your Data Fast: MariaDB Query Optimizer by Vicențiu Ciorbaru, MariaDB Foundation
  • Benchmarking Should Never Be Optional, Art van Scheppingen, Messagebird

Tutorial Schedule

Monday, September 30 is the tutorial day, offering a range of hands-on half-day sessions to get you up to speed with open source database technologies, along with a full day’s tutorial introducing MySQL administration. The half-day sessions include:

  • Innodb Architecture and Performance Optimization Tutorial for MySQL 8 presented by Peter Zaitsev, Percona
  • Introduction to PL/pgSQL Development presented by Jim Mlodgenski, AWS
  • MariaDB Server 10.4: The Complete Tutorial presented by Colin Charles, Consultant
  • MySQL 8.0 InnoDB Cluster – Easiest Tutorial! presented by Frédéric Descamps and Kenny Gryp, Oracle
  • Open Source Database Performance Optimization and Monitoring with PMM, presented by Michael Coburn, Percona
  • Getting Started with Kubernetes and XtraDB Cluster, presented by Matthew Boehm, Percona
  • MySQL, MariaDB & PostgreSQL as PaaS on Azure, presented by Jose Manuel Jurado Diaz, Vitor Pombeiro, and Hamza Aqel, Microsoft
  • Percona XtraDB Cluster Tutorial, presented by Tibor Köröcz, Percona
  • PostgreSQL For Oracle and MySQL DBAs and For Beginners, Avinash Vallarapu, Percona
  • Percona Server for MongoDB, presented by Adamo Tonete and Vinicius Grippa, Percona
  • MySQL 101 presented by Tom De Cooman and Dimitri Vanervobeke, Percona

Convinced? Then register now before September 1 for discounted prices.

Aug 19, 2019

Ally raises $8M Series A for its OKR solution

OKRs, or Objectives and Key Results, are a popular planning method in Silicon Valley. Like most of those methods that make you fill in some form once every quarter, I’m pretty sure employees find them rather annoying and a waste of their time. Ally wants to change that and make the process more useful. The company today announced that it has raised an $8 million Series A round led by Accel Partners, with participation from Vulcan Capital, Founders Co-op and Lee Fixel. The company, which launched in 2018, previously raised a $3 million seed round.

Ally founder and CEO Vetri Vellore tells me that he learned his management lessons and the value of OKR at his last startup, Chronus. After years of managing large teams at enterprises like Microsoft, he found himself challenged to manage a small team at a startup. “I went and looked for new models of running a business execution. And OKRs were one of those things I stumbled upon. And it worked phenomenally well for us,” Vellore said. That’s where the idea of Ally was born, which Vellore pursued after selling his last startup.

Most companies that adopt this methodology, though, tend to work with spreadsheets and Google Docs. Over time, that simply doesn’t work, especially as companies get larger. Ally, then, is meant to replace these other tools. The service is currently in use at “hundreds” of companies in more than 70 countries, Vellore tells me.

One of its early adopters was Remitly. “We began by using shared documents to align around OKRs at Remitly. When it came time to roll out OKRs to everyone in the company, Ally was by far the best tool we evaluated. OKRs deployed using Ally have helped our teams align around the right goals and have ultimately driven growth,” said Josh Hug, COO of Remitly.


Vellore tells me that he has seen teams go from annual or bi-annual OKRs to more frequently updated goals, too, which is something that’s easier to do when you have a more accessible tool for it. Nobody wants to use yet another tool, though, so Ally features deep integrations into Slack, with other integrations in the works (something Ally will use this new funding for).

Since adopting OKRs isn’t always easy for companies that previously used other methodologies (or nothing at all), Ally also offers training and consulting services with online and on-site coaching.

Pricing for Ally starts at $7 per month per user for a basic plan, but the company also offers a flat $29 per month plan for teams with up to 10 users, as well as an enterprise plan, which includes some more advanced features and single sign-on integrations.
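Comparing the two published tiers (a sketch using only the prices above; enterprise pricing isn’t listed), the flat plan becomes the cheaper option at five or more users:

```python
def monthly_cost(users):
    """Cheaper of the two published plans: $7/user basic vs. $29 flat (up to 10 users)."""
    basic = 7 * users
    if users <= 10:
        return min(basic, 29)
    return basic  # the flat plan caps at 10 users

for n in (3, 4, 5, 10, 12):
    print(n, monthly_cost(n))  # flat plan wins from 5 users up through 10
```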

Aug 19, 2019

The five great reasons to attend TechCrunch’s Enterprise show Sept. 5 in SF

The vast enterprise tech category is Silicon Valley’s richest, and today it’s poised to change faster than ever before. That’s probably the biggest reason to come to TechCrunch’s first-ever show focused entirely on enterprise. But here are five more reasons to commit to joining TechCrunch’s editors on September 5 at San Francisco’s Yerba Buena Center for an outstanding day (agenda here) addressing the tech tsunami sweeping through enterprise. 

No. 1: Artificial intelligence
No one doubts that AI — at once the most consequential and most hyped technology — will change business software and increase productivity like few, if any, technologies before it. To peek ahead into that future, TechCrunch will interview Andrew Ng, arguably the world’s most experienced AI practitioner at huge companies (Baidu, Google) as well as at startups. AI will be a theme across every session, but we’ll address it again head-on in a panel with investor Jocelyn Goldfein (Zetta), founder Bindu Reddy (Reality Engines) and executive John Ball (Salesforce / Einstein).

No. 2: Data, the cloud and Kubernetes
If AI is at the dawn of tomorrow, cloud transformation is the high noon of today. Indeed, 90% of the world’s data was created in the past two years, and no enterprise can keep its data hoard on-prem forever. Azure CTO Mark Russinovich will discuss Microsoft’s vision for the cloud. Leaders in the open-source Kubernetes revolution — Joe Beda (VMware), Aparna Sinha (Google) and others — will dig into what Kubernetes means to companies making the move to cloud. And last, there is the question of how to find signal in all the data — which will bring three visionary founders to the stage: Benoit Dageville (Snowflake), Ali Ghodsi (Databricks) and Murli Thirumale (Portworx).

No. 3: Everything else on the main stage!
Let’s start with a fireside chat with SAP CEO Bill McDermott and Qualtrics Chief Experience Officer Julie Larson-Green. We have top investors talking about where they are making their bets, and security experts talking data and privacy. And then there is quantum computing, the technology revolution waiting on the other side of AI: Jay Gambetta, the principal theoretical scientist behind IBM’s quantum computing effort; Jim Clarke, the director of quantum hardware at Intel Labs; and Krysta Svore, who leads Microsoft’s quantum effort.

All told, there are 21 programming sessions.

No. 4: Network and get your questions answered
There will be two Q&A breakout sessions with top enterprise investors; this is for founders (and anyone else) to query investors directly. Plus, TechCrunch’s unbeatable CrunchMatch app makes it really easy to set up meetings with the other attendees, an incredible array of folks, plus the 20 early-stage startups exhibiting on the expo floor.

No. 5: SAP
Enterprise giant SAP is our sponsor for the show, and they are not only bringing a squad of top executives, they are producing four parallel track sessions featuring SAP Chief Innovation Officer Max Wessel, SAP Chief Designer and Futurist Martin Wezowski and SAP.iO Managing Director Ram Jambunathan, in sessions including how to scale up an enterprise startup, how startups win large enterprise customers, and what the enterprise future looks like.

Check out the complete agenda. Don’t miss this show! This line-up is a view into the future like none other. 

Grab your $349 tickets today, and don’t wait until the day of the show to book, because prices go up at the door!

We still have two Startup Demo Tables left. Each table comes with four tickets and a prime location to demo your startup on the expo floor. Book your demo table now before they’re all gone!

Aug 19, 2019

Microsoft acquires jClarity, a Java performance tuning tool

Microsoft announced this morning that it was acquiring jClarity, the company behind a tool designed to tune the performance of Java applications; that work will now be done on Azure. In addition, jClarity has been offering a flavor of Java called AdoptOpenJDK, which it bills as a free alternative to Oracle Java. The companies did not disclose the terms of the deal.

As Microsoft pointed out in a blog post announcing the acquisition, they are seeing increasing use of large-scale Java installations on Azure, both internally with platforms like Minecraft and externally with large customers, including Daimler and Adobe.

The company believes that by adding the jClarity team and its toolset, it can help service these Java customers better. “The team, formed by Java champions and data scientists with proven expertise in data driven Java Virtual Machine (JVM) optimizations, will help teams at Microsoft to leverage advancements in the Java platform,” the company wrote in the blog.
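For context, the data behind this kind of JVM optimization typically comes from the JVM’s own garbage-collection and diagnostic logs. The flags below are a generic, illustrative way to capture that data on JDK 9+ (using unified logging); they are not a description of jClarity’s actual tooling:

```shell
# Illustrative only: capture GC activity for offline analysis on JDK 9+
java -Xlog:gc*:file=gc.log:time,uptime,level,tags \
     -XX:+HeapDumpOnOutOfMemoryError \
     -jar app.jar
```

Tools in this space generally ingest logs like `gc.log` and recommend heap-size or collector changes based on the observed pause times and allocation rates.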

Microsoft has actually been part of the AdoptOpenJDK project, along with a Who’s Who of other enterprise companies, including Amazon, IBM, Pivotal, Red Hat and SAP.

Co-founder and CEO Martijn Verburg, writing in a company blog post announcing the deal, unsurprisingly spoke in glowing terms about the company he was about to become a part of. “Microsoft leads the world in backing developers and their communities, and after speaking to their engineering and programme leadership, it was a no brainer to enter formal discussions. With the passion and deep expertise of Microsoft’s people, we’ll be able to support the Java ecosystem better than ever before,” he wrote.

Verburg also took the time to thank the employees, customers and community that have supported the open-source project on top of which his company was built. Verburg’s new title will be Principal Engineering Group Manager (Java) at Microsoft.

It is unclear how the community will react to another flavor of Java being absorbed by another large vendor, or how the other big vendors involved in the project will feel about it, but regardless, jClarity’s flavor of Java and its performance tools are part of Microsoft now.

Note: This article originally stated that all of jClarity’s products are open source. Its performance tools are paid services.

Aug
19
2019
--

Simon Data hauls in $30M Series C to continue building customer data platform

As businesses use an increasing variety of marketing software solutions, the goal of collecting all of that data is to improve the customer experience. Simon Data announced a $30 million Series C round today to help companies do just that.

The round was led by Polaris Partners. Previous investors .406 Ventures and F-Prime Capital also participated. Today’s investment brings the total raised to $59 million, according to the company.

Jason Davis, co-founder and CEO, says his company is trying to pull together a lot of complex data from a variety of sources, while driving actions to improve customer experience. “It’s about taking the data, and then building complex triggers that target the right customer at the right time,” Davis told TechCrunch. He added, “This can be in the context of any sort of customer transaction, or any sort of interaction with the business.”

Companies tend to use a variety of marketing tools, and Simon Data takes on the job of understanding the data and activities going on in each one. Then based on certain actions — such as, say, an abandoned shopping cart — it delivers a consistent message to the customer, regardless of the source of the data that triggered the action.
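In loose pseudocode terms (this is a hypothetical sketch, not Simon Data’s actual API), the pattern described above amounts to routing events from many sources through one set of trigger rules, so the resulting message depends only on the customer and the action, not on which tool reported it:

```python
from dataclasses import dataclass

@dataclass
class Event:
    customer_id: str
    source: str   # which marketing tool reported the event, e.g. "shopify"
    kind: str     # the customer action, e.g. "cart_abandoned"

class TriggerEngine:
    """Maps event kinds to actions, ignoring the originating tool."""
    def __init__(self):
        self.rules = {}

    def on(self, kind, action):
        self.rules.setdefault(kind, []).append(action)

    def handle(self, event):
        # Fire every action registered for this kind of event
        return [action(event) for action in self.rules.get(event.kind, [])]

engine = TriggerEngine()
engine.on("cart_abandoned", lambda e: f"email:{e.customer_id}:cart-reminder")

# The same message fires regardless of which tool saw the abandonment
print(engine.handle(Event("c1", "shopify", "cart_abandoned")))
print(engine.handle(Event("c1", "braze", "cart_abandoned")))
```

The design point is that normalization happens before the rules run, which is what lets the platform deliver a consistent message across data sources.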

The company positions this ability to pull together data as a customer data platform (CDP). Part of the platform’s job is to aggregate data and use it as the basis for other activities: in this case, activating actions you define based on what you know about the customer at any given moment in the process.

As the company collects this data, it also sees an opportunity to use machine learning to create more automated and complex types of interactions. “There are a tremendous number of super complex problems we have to solve. Those include core platform or infrastructure, and we also have a tremendous opportunity in front of us on the predictive and data science side as well,” Davis said. He said that is one of the areas where they will put today’s money to work.

The company, which launched in 2014, is based in NYC and currently has 87 employees, a number expected to grow with today’s announcement. Customers include Equinox, Venmo and WeWork. The company’s most recent funding was a $20 million round in July 2018.
