Aug
14
2017
--

Amazon Macie helps businesses protect their sensitive data in the cloud

Amazon’s AWS cloud computing service hosted its annual NY Summit today and it used the event to launch a new service: Amazon Macie. The idea behind Macie is to use machine learning to help businesses protect their sensitive data in the cloud.

Aug
09
2017
--

AWS just proved why standards drive technology platforms

When AWS today became a full-fledged member of the container standards body, the Cloud Native Computing Foundation, it represented a significant milestone. By joining Google, IBM, Microsoft, Red Hat and just about every company that matters in the space, AWS has acknowledged that when it comes to container management, standards matter.

Jul
27
2017
--

It looks like Amazon would be losing a lot of money if not for AWS

Amazon reported its second-quarter earnings today, and it was a bit of a whiff — and a bummer for Jeff Bezos, who is now no longer the solar system’s richest human and has been relegated to the unfortunate position of second-richest human.

Jun
26
2017
--

Amazon said to be working on translation services for AWS customers

Amazon is working on an offering that would allow developers building apps and websites using AWS to translate their content to multiple languages, CNBC reports. The machine translation tech used to provide the multi-lingual versions of client products would be based on tech Amazon uses across its own products, the report claims. Translation services are a key competitive offering for Amazon…

Apr
28
2017
--

From Percona Live 2017: Thank You, Attendees!

From everyone at Percona and Percona Live 2017, we’d like to send a big thank you to all our sponsors, exhibitors, and attendees at this year’s conference.

This year’s conference was an outstanding success! The event brought the open source database community together, with a technical emphasis on the core topics of MySQL, MariaDB, MongoDB, PostgreSQL, AWS, RocksDB, time series, monitoring and other open source database technologies.

We will be posting tutorial and session presentation slides at the Percona Live site, and all of them should be available shortly. 


Thanks to Our Sponsors!

We would like to thank all of our valuable event sponsors, especially our diamond sponsors Continuent and VividCortex – your participation really makes the show happen.

We have developed multiple sponsorship options to allow participation at a level that best meets your partnering needs. Our goal is to create a significant opportunity for our partners to interact with Percona customers, other partners and community members. Sponsorship opportunities are available for Percona Live Europe 2017.

Download a prospectus here.

Percona Live Europe 2017: Dublin, Ireland!

This year’s Percona Live Europe will take place September 25th-27th, 2017, in Dublin, Ireland. Put it on your calendar now! Information on speakers, talks, sponsorship and registration will be available in the coming months.

We look forward to seeing you there!

Apr
27
2017
--

Percona Live 2017: Lessons Learned While Automating MySQL Deployments in the AWS Cloud

Percona Live 2017

The last day of Percona Live 2017 is still going strong, with talks all the way until 4:00 pm, followed by closing remarks and a prize giveaway on the main stage. I’m going to a few more sessions today, including one from Stephane Combaudon from Slice Technologies: Lessons learned while automating MySQL deployments in the AWS Cloud.

In this talk, Stephane discussed how automating deployments is a key success factor in the cloud, and a great way to leverage its flexibility. But while automation is usually straightforward for application code, it is much harder for databases. When Slice started automating their MySQL servers, they chose simple, production-proven components: Chef to deploy files, MHA for high availability and Percona XtraBackup for backups. But they quickly ran into several problems:

  • How do you maintain an updated list of MySQL servers in the MHA configuration when servers can be automatically stopped or started?
  • How can you coordinate your servers for them to know that they need to be configured as a master or as a replica?
  • How do you write complex logic with Chef without being trapped by Chef’s two-pass model?
  • How can you handle clusters with different MySQL versions, or a single cluster where all members do not use the same MySQL version?
  • How can you get reasonable backup and restore time when the dataset is over 1TB and the backups are stored on S3?

This session discussed the errors Slice made, and the solutions they found while tackling MySQL automation.

Stephane was kind enough to speak with me after the talk.

There are more talks today. Check out Thursday’s schedule here. Don’t forget to attend the closing remarks and prize giveaway at 4:00 pm.

Apr
25
2017
--

Backup service Rubrik now works natively in AWS and Azure

Rubrik, the startup that provides data management services like backup and recovery to large enterprises, is in the process of raising between $150 million and $200 million on a valuation of $1 billion, as we reported yesterday. And as a measure of how it’s growing, today it’s announcing an expansion of its product set, specifically in cloud services.
Now Rubrik — which…

Mar
28
2017
--

AWS launches Amazon Connect, productizes Amazon’s in-house contact center software

AWS continues to add yet more software and services to build out its revenues and touchpoints with businesses that already use its cloud infrastructure for storage and to host and administer services and apps. The latest product, launching today, is Amazon Connect, a cloud-based contact center solution. AWS said it is based on the same tech that Amazon itself has built and uses in-house…

Mar
06
2017
--

MySQL, --i-am-a-dummy!

In this blog post, we’ll look at how “operator error” can cause serious problems (like the one we saw last week with AWS), and how to avoid them in MySQL using --i-am-a-dummy.

Recently, AWS had some serious downtime in their East region, which they explained as the consequence of a bad deployment. It seems like most of the Internet was affected in one way or another. Some on Twitter dubbed it “S3 Dependency Awareness Day.”

Since the outage, many companies (especially Amazon!) are reviewing their production access and deployment procedures. It would be a lie if I claimed I’ve never made a mistake in production. In fact, I would be afraid of working with someone who claims to have never made a mistake in a production environment.

Making a mistake or two is how you learn to have a full sense of fear when you start typing:

UPDATE t1 SET c1='x' ...

I think many of us have experienced forehead sweats and hand shaking in these cases – they save us from major mistakes!

The good news is that MySQL can help you with this. All you have to do is admit that you are human, and use the following command (you can also set it in the .my.cnf file in your home directory):

mysql --i-am-a-dummy
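If you prefer the config-file route, a minimal sketch of what that looks like (this assumes the standard mysql command-line client reading ~/.my.cnf; the option only affects that client, not applications connecting through other drivers):

```ini
# ~/.my.cnf — make safe-updates the default for the mysql client
[mysql]
safe-updates
```

With this in place, every interactive session gets the same protection as passing --safe-updates (or --i-am-a-dummy) on the command line.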

Using this command (also known as safe-updates) sets the following SQL mode when logging into the server:

SET sql_safe_updates=1, sql_select_limit=1000, max_join_size=1000000;

The --safe-updates and --i-am-a-dummy flags were introduced together in MySQL 3.23.11, and according to some sites from around the time of release, it’s “for users that once may have done a DELETE FROM table_name but forgot the WHERE clause.”

What this does is ensure you can’t perform an UPDATE or DELETE without a WHERE clause that uses a key column (or a LIMIT). This is great because it forces you to think through what you are doing. If you still want to update the whole table, you need to do something like WHERE ID > 0. Interestingly, safe-updates also blocks the use of WHERE 1, which means “where true” (or basically everything).

The other safety you get with this option is that SELECT is automatically limited to 1000 rows, and JOIN is limited to examining 1 million rows. You can override these latter limits with extra flags, such as:

--select_limit=500 --max_join_size=10000
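To make the behavior concrete, here is a sketch of a session under safe-updates (the table t1 and its indexed id column are hypothetical, and the exact error text may vary between MySQL versions):

```sql
-- Rejected: no key-based WHERE clause and no LIMIT
UPDATE t1 SET c1 = 'x';
-- ERROR 1175 (HY000): You are using safe update mode and you tried to
-- update a table without a WHERE that uses a KEY column.

-- Accepted: the WHERE clause references the indexed id column
UPDATE t1 SET c1 = 'x' WHERE id > 0;

-- Rejected: WHERE 1 is always true, so it is treated like no WHERE at all
DELETE FROM t1 WHERE 1;
```

That brief pause to add a keyed WHERE clause is exactly the “forced thinking” the option is designed to provoke.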

I have added this to the .my.cnf on my own servers, and definitely use this with my clients.

Mar
01
2017
--

The day Amazon S3 storage stood still

By now you’ve probably heard that Amazon’s S3 storage service went down in its Northern Virginia datacenter for the better part of 4 hours yesterday, and took parts of a bunch of prominent websites and services with it. It’s worth noting that as of this morning, the Amazon dashboard was showing everything was operating normally. While yesterday’s outage was a big deal…
