Apr
28
2017
--

Tencent to open AI research center in Seattle

 Chinese tech conglomerate Tencent will be opening a new AI research center in Seattle, according to The Information. The company has long had a core office in Palo Alto, but this will be its first major machine intelligence R&D effort in the country. Earlier this week Tencent announced that it would open its first data center in Silicon Valley. Read More

Apr
28
2017
--

Data management startup Rubrik confirms $180M round at a $1.3B valuation

Rubrik, a startup that provides data backup and recovery services for enterprises across both cloud and on-premises environments, has closed a $180 million round of funding that values the company at $1.3 billion. The news confirms a report we ran earlier this week noting that the company was raising between $150 million and $200 million. Read More

Apr
28
2017
--

From Percona Live 2017: Thank You, Attendees!

From everyone at Percona and Percona Live 2017, we’d like to send a big thank you to all our sponsors, exhibitors, and attendees at this year’s conference.

This year’s conference was an outstanding success! The event brought the open source database community together, with a technical emphasis on the core topics of MySQL, MariaDB, MongoDB, PostgreSQL, AWS, RocksDB, time series, monitoring and other open source database technologies.

We will be posting the tutorial and session presentation slides on the Percona Live site; all of them should be available shortly.

Thanks to Our Sponsors!

We would like to thank all of our valued event sponsors, especially our diamond sponsors Continuent and VividCortex. Your participation really makes the show happen.

We have developed multiple sponsorship options to allow participation at a level that best meets your partnering needs. Our goal is to create a significant opportunity for our partners to interact with Percona customers, other partners and community members. Sponsorship opportunities are available for Percona Live Europe 2017.

Download a prospectus here.

Percona Live Europe 2017: Dublin, Ireland!

This year’s Percona Live Europe will take place September 25th-27th, 2017, in Dublin, Ireland. Put it on your calendar now! Information on speakers, talks, sponsorship and registration will be available in the coming months.

We look forward to seeing you there!

Apr
28
2017
--

Cloudera finishes up 20% in stock market debut

After pricing its IPO at $15 per share, Cloudera, the enterprise big data company, closed the day up more than 20 percent, at $18.09. That also puts it above the $12-to-$14 range Cloudera had originally set. Read More

Apr
28
2017
--

Equity podcast: Earnings clown car and the profitable-ish Dropbox with Hunter Walk

 Welcome back to Equity, TechCrunch’s venture-capital focused podcast where we unpack the numbers behind the narrative. Homebrew’s own Hunter Walk joined us for this episode along with co-hosts Katie Roof, Matthew Lynley and myself. This week has been a delightful clown car of technology earnings, so we dove into Twitter’s surprisingly strong report that drove its share… Read More

Apr
27
2017
--

Investors are betting 3DR can find life after Solo as a drone data platform

An early player in drone-tech, 3D Robotics Inc. on Thursday announced that it has raised $53 million in a Series D round of funding, including new equity funding and conversion of debt to equity. Atlantic Bridge led the round, joined by Autodesk Forge Fund, True Ventures, Foundry Group, Mayfield and other undisclosed investors, according to the company statement. Read More

Apr
27
2017
--

Percona Live 2017: Beringei – Facebook’s Open Source, In-Memory Time Series Database (TSDB)

So that is just about a wrap here at Percona Live 2017 – except for the closing comments and prize giveaway. Before we leave, I have one more session to highlight: Facebook’s Beringei.

Beringei is Facebook’s open source, in-memory time series database. Justin Teller, Engineering Manager at Facebook, presented the session. According to Justin, large-scale monitoring systems cannot handle large-scale analysis in real time because the query performance is too slow. After evaluating and rejecting several disk-based and existing in-memory cache solutions, Facebook turned their attention to writing their own in-memory TSDB to power the health and performance monitoring system at Facebook. They presented “Gorilla: A Fast, Scalable, In-Memory Time Series Database (http://www.vldb.org/pvldb/vol8/p1816-teller.pdf)” at VLDB 2015.

In December 2016, they open sourced the majority of that work with Beringei (https://github.com/facebookincubator/beringei). In this talk, Justin started by presenting how Facebook uses this database to serve production monitoring workloads at Facebook, with an overview of how they use it as the basis for a disaster-ready, high-performance distributed system. He closed by presenting some new performance analysis comparing (favorably) Beringei to Prometheus. Prometheus is an open source TSDB whose time series compression was inspired by the Gorilla VLDB paper and has similar compression behavior.
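The two encodings the Gorilla paper is best known for are delta-of-delta timestamps and XOR'd float values. Here is a minimal Python sketch of why those deltas compress so well; the real Beringei/Gorilla format goes further and bit-packs the results using leading/trailing-zero counts, which this sketch deliberately omits.

```python
import struct

def delta_of_delta(timestamps):
    """Timestamps arriving at a fixed interval (e.g. every 60s) yield a
    delta-of-delta stream of zeros, which the real format stores in
    roughly one bit per point."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return [b - a for a, b in zip(deltas, deltas[1:])]

def xor_deltas(values):
    """XOR each float's 64-bit pattern with its predecessor's. Repeated
    or slowly changing values produce deltas that are zero, or that have
    long runs of zero bits, which the real format then bit-packs."""
    bits = [struct.unpack(">Q", struct.pack(">d", v))[0] for v in values]
    return [b ^ a for a, b in zip(bits, bits[1:])]
```

For a gauge sampled every minute with a flat value, both streams collapse to zeros: `delta_of_delta([0, 60, 120, 180])` gives `[0, 0]`, and `xor_deltas([22.5, 22.5, 22.5])` gives `[0, 0]`.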

After the talk, Justin was kind enough to speak briefly with me. Check it out:

It’s been a great conference, and we’re looking forward to seeing you all at Percona Live Europe!

Apr
27
2017
--

Percona Live 2017: Hawkular Metrics, An Overview

The place is still frantic here at Percona Live 2017 as everybody tries to squeeze in a few more talks before the end of the day. One such talk was given by Red Hat’s Stefan Negrea on Hawkular Metrics.

Hawkular Metrics is a scalable, long-term, high-performance storage engine for metric data. The session was an overview of the project, covering its history, the Hawkular ecosystem, technical details, developer features and APIs, and third-party integrations.

Hawkular Metrics is backed by Cassandra for scalability, and is used and exposed by Hawkular Services. The API uses JSON to communicate with clients.
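As a rough illustration of that JSON API, a raw gauge push might be built like this. The metric id, tenant name, and sample values are hypothetical, and the payload shape is based on the Hawkular Metrics REST docs rather than anything from the talk, so treat it as a sketch:

```python
import json

def gauge_payload(metric_id, samples):
    """Build a JSON body for Hawkular Metrics' raw gauge ingestion:
    a list of metrics, each with an id and timestamp/value data points.
    (Shape follows the Hawkular Metrics REST API; this is a sketch.)"""
    return json.dumps([{
        "id": metric_id,
        "data": [{"timestamp": ts, "value": v} for ts, v in samples],
    }])

# The body would be POSTed to the server's gauge ingestion endpoint,
# scoped to a tenant via the Hawkular-Tenant header (example values):
#   POST /hawkular/metrics/gauges/raw
#   Hawkular-Tenant: my-tenant
body = gauge_payload("cpu.load", [(1493337600000, 0.42),
                                  (1493337660000, 0.38)])
```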

Users of Hawkular Metrics include:

  • IoT enthusiasts who need to collect metrics, and possibly trigger alerts
  • Operators who are looking for a solution to store metrics from StatsD, collectd, or syslog
  • Developers of solutions who need long-term time series database storage
  • Users of ManageIQ who are looking for Middleware management
  • Users of Kubernetes/Heapster who want to store Docker container metrics in a long-term time series database storage, thanks to the Heapster sink for Hawkular.

Stefan was kind enough to speak with me after the talk. Check it out below:

There are more talks today. Check out Thursday’s schedule here. Don’t forget to attend the Closing Remarks and prize giveaway at 4:00 pm.

Apr
27
2017
--

Microsoft meets expectations with $23.6B in revenue, Azure revenue up 93%

 Microsoft just reported earnings for the last quarter. The company reported non-GAAP revenue of $23.6 billion and non-GAAP earnings per share of $0.73. Wall Street’s cadre of crack analysts expected the company’s earnings per share to come in at around $0.70, with revenue hitting about $23.6 billion. In the year-ago quarter, Microsoft reported earnings per share of $0.62. Wall… Read More

Apr
27
2017
--

Percona Live 2017: Lessons Learned While Automating MySQL Deployments in the AWS Cloud

The last day of Percona Live 2017 is still going strong, with talks all the way until 4:00 pm, followed by closing remarks and a prize giveaway on the main stage. I’m going to a few more sessions today, including one from Stephane Combaudon of Slice Technologies: Lessons learned while automating MySQL deployments in the AWS Cloud.

In this talk, Stephane discussed how automating deployments is a key success factor in the cloud, and a great way to leverage its flexibility. But while automation is usually not too difficult for application code, it is much harder for databases. When Slice started automating their MySQL servers, they chose simple, production-proven components: Chef to deploy files, MHA for high availability and Percona XtraBackup for backups. But they quickly faced several problems:

  • How do you maintain an updated list of MySQL servers in the MHA configuration when servers can be automatically stopped or started?
  • How can you coordinate your servers for them to know that they need to be configured as a master or as a replica?
  • How do you write complex logic with Chef without being trapped by Chef’s two-pass model?
  • How can you handle clusters with different MySQL versions, or a single cluster where all members do not use the same MySQL version?
  • How can you get reasonable backup and restore time when the dataset is over 1TB and the backups are stored on S3?

This session discussed the errors Slice made, and the solutions they found while tackling MySQL automation.
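As a sketch of one answer to the first question above, the MHA configuration can be regenerated from a dynamically discovered server list (e.g. from cloud API tags) instead of being maintained by hand. The hostnames, paths, and helper below are hypothetical, and this is not Slice's actual tooling:

```python
def render_mha_config(app_name, servers):
    """Render an MHA-style ini config from a discovered server list.
    servers: list of (hostname, candidate_master) tuples. Regenerating
    this file whenever instances stop or start keeps MHA's view of the
    cluster in sync without hand-editing."""
    lines = [
        "[server default]",
        f"manager_workdir=/var/log/mha/{app_name}",
        "",
    ]
    for i, (host, candidate) in enumerate(servers, start=1):
        lines.append(f"[server{i}]")
        lines.append(f"hostname={host}")
        if candidate:
            lines.append("candidate_master=1")
        lines.append("")
    return "\n".join(lines)

cfg = render_mha_config("app1", [("db1.example.com", True),
                                 ("db2.example.com", False)])
```

The same regenerate-on-change approach (a Chef template fed by a discovery query) sidesteps hand-maintained server lists, though the coordination and versioning questions above still need their own answers.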

Stephane was kind enough to speak with me after the talk. Check it out below:

There are more talks today. Check out Thursday’s schedule here. Don’t forget to attend the Closing Remarks and prize giveaway at 4:00 pm.
