Oct 26, 2021

Junior Fishing Chair For Kids

Want to give your kids memories they will cherish for life? Try bringing them on a fishing trip. You can make it a family outing or go fishing alone with your children. Either way, you will be able to relax and enjoy the beautiful sights of nature.

Plus, you’ll also be teaching them valuable lessons about life. But what if they are too old to sit on your lap while fishing? You might want to invest in a junior fishing chair for kids.

Imagine the joy on their faces when they can sit with you while fishing. They will have a great time bonding with you through this activity. Of course, it wouldn’t hurt to bring along a comfortable chair made just for them.

Tips When Buying a Kids’ Fishing Chair

Here are some tips on how to choose the right junior fishing chair for kids:

Tip #1: Size and Weight

The first thing that you should consider is the size of the junior fishing chair. You should be able to find a chair that fits your child perfectly. After all, their comfort will have a significant impact on how they enjoy the trip. Make sure that it can hold your child’s weight, too.

Tip #2: Age and Weight Limit

The next thing to check is the age and weight limit of the junior fishing chair. You don’t want to bring a chair made for older children on a family trip with younger kids; a small child may not be able to sit safely in a chair built for someone bigger.

Tip #3: Material Used

It’s also important to look into the material used for the junior fishing chair. After all, you want to be sure that it can bear your child’s weight. A plastic- or metal-framed chair is a sturdy choice if you plan to bring your kids along on family trips.

Tip #4: Price

The price of a junior fishing chair will also depend on its weight limit and features. Since you’ll want a chair that lasts for many years, it’s best to choose the most durable one you can find.

Recommended Fishing Chairs for Kids

We have visited Amazon.com and found some fishing chairs for kids we can recommend: one for girls, one for boys, and one for both.

Please note that the prices displayed on Amazon.com are subject to change at any time. To see the current price, click the link attached to each recommendation.

For Girls: Ultralight Backpack Cooler Chair – Compact Lightweight and Portable Folding Stool for Kids

This is a lightweight fishing chair for girls. You can carry it anywhere you go, and its backpack design makes it easy to bring along. It only weighs 3 pounds, so even young children can carry it around with ease. In addition, the compact size means that this chair won’t take up too much space inside your car or house.

Check the price on Amazon.com (Sponsored)

For Boys: Backpack Cooler Chair, Portable Fishing Chair for Outdoor Activities

A fishing chair for boys should be sturdy, comfortable, and easy to carry around. This is exactly what this chair offers! It is lightweight, so your boy will have no trouble carrying it on his back. Plus, its size won’t take up too much space inside your car or home.

Check the price on Amazon.com (Sponsored)

For Both: Kelsyus Kids Outdoor Canopy Chair

Why not get a fishing chair for kids that they can enjoy alongside you? The Kelsyus Kids Outdoor Canopy Chair is designed to fit children of all ages, and its canopy shades your child from the sun. The frame is made from steel, making it sturdy enough to endure the outdoor environment. It also folds up quickly, so you won’t have any trouble bringing it along on family trips.

Check the price on Amazon.com (Sponsored)

Where Can I Find Fishing Chairs for Kids?

You can find fishing chairs for kids at major department stores, sporting goods retailers, and even grocery stores. You may also want to check out Amazon.com as they offer some of the best deals online.

Final Thoughts

What better way to bond with your child than fishing? The right junior fishing chair can make all the difference, so you need to choose carefully. Follow our tips above, and you’re sure to find the perfect one for your family!


Oct 25, 2021

Talking Drupal #318 – DDEV with Randy Fay

Today we are talking about DDEV with Randy Fay.

TalkingDrupal.com/318

Topics

  • John – TD youtube and linkedin
  • AmyJune – Getting ready for winter and music shows
  • Randy – managing hospice for mother
  • Nic – House cleanouts and lego
  • DDEV elevator pitch
  • Beginning of DDEV
  • Development Process
  • Team
  • Docker use cases
  • Listener Marc van Gend “I’d love to hear about the current state of the project, given the change of ownership. Is it healthy financially?”
  • Listener Josh Miller “Randy has a long history with Drupal, outdating most. https://drupal.org/u/rfay a little over 16 years. How did his contribution start, and how has it changed? Does he still build with Drupal?”
  • Roadmap
  • Listener Stephen Cross “I easily got DDEV running on Raspberry Pi, while other tools do not run simply. How has ARM adoption been?”
  • Josh on Twitter “Will docker work on new processors?”

Resources

  • Compu-Home Systems TomorrowHouse (1980s home automation run by an Apple II)
  • Randy and Nancy’s 2.5-year bike trip through the Americas
  • DDEV docs
  • DDEV GitHub
  • What’s So Different about DDEV?
  • #ddev for tweets
  • Talking Drupal on YouTube

Guests

Randy Fay – http://randyfay.com @randyfay

Hosts

Nic Laflin – www.nLighteneddevelopment.com @nicxvan
John Picozzi – www.epam.com @johnpicozzi
AmyJune – @volkswagenchick

MOTW

reCAPTCHA – uses the Google reCAPTCHA web service to improve the CAPTCHA system. It is tough on bots and easy on humans.

reCAPTCHA is built for security. Armed with state of the art technology, it always stays at the forefront of spam and abuse fighting trends. reCAPTCHA is on guard for you, so you can rest easy.

Oct 25, 2021

Papasan chair for kids

If you’re looking for a fun and funky piece of furniture to add to your child’s room, why not take a look at the Papasan chair? An Indonesian chair similar in design to a bowl, it will add bright color to your child’s room.

What is a Papasan chair?

A Papasan chair is a small, round, flat-bottomed woven wicker chair, often topped with a cushion. The word “Papasan” comes from two words in the Indonesian language that describe something round and for sitting. Traditional Papasan chairs are woven from rattan; modern variations come with cushions in any sort of material, including cotton or leather. The traditional style is woven in the same way as a basket, which makes it quite sturdy.

This type of chair is designed specifically for relaxing or even sleeping. It is believed that when the Papasan chair was invented, it was created to aid people who suffered from sleep disorders, nervous disorders, and anxiety.

Can kids use Papasan chairs?

Kids can definitely use this chair. It is ideal for children who like to sit and read, listen to music, play video games, or watch TV. The traditional Papasan chair can be challenging for smaller children to climb into; however, modern versions sit much lower than the traditional ones.

The modern version of the chair has a low rattan or metal frame, round in shape, with a plush cushion. A child can easily climb into this type of chair, whereas climbing into the traditional Papasan chair may prove more difficult.

What are the benefits of using Papasan chairs for kids?

Papasan chairs are great for kids because they offer a lot of benefits. Admittedly, they can be pretty expensive, but that also makes them an ideal gift for the child in your life who has everything and needs nothing.

Other than that, it’s an excellent piece to accompany any bedroom furniture, especially if you’re looking to add something vibrant and fun.

They are incredibly comfortable, which means that kids can easily curl up with a good book, take a nap or even read the newspaper in this chair.

Kids love to lounge about and relax; therefore, if you’re looking for them to sit still for any amount of time without squirming around like madmen, having something comfortable is an advantage.

What’s the best Papasan chair?

There are a handful of options available when it comes to Papasan chairs. However, not all of them provide the same quality.

One of the best options, in my opinion, is the OSP Home Furnishings Wicker Papasan Chair. (Sponsored link)

This chair provides a comfortable sitting experience with its generous cushioning. It is made from high-quality, durable materials, so you get your money’s worth.

The OSP Home Furnishings Papasan Chair’s 360-degree swivel is what makes it so unique. Thanks to it, you and your kid can spin without resistance or noise, moving about freely.


Oct 25, 2021

Why Seating Is Important For Emotional Behavioral Disabilities

Here at Comfybummy.com, we know the importance of comfortable seating for kids. It helps them develop a good attitude and can help their spine grow correctly.

In 2020, researcher Corinne E. Bloom Williams at the University of South Florida conducted a study evaluating the use of alternative seating with kids at risk for emotional/behavioral disabilities.

Looking at the results of her study, we believe even the researcher was caught by surprise.

The Method She Used

She did the study with three groups of kids, ages 4-8.

Group A sat on the standard seat or chair usually found in a classroom. This group didn’t have any additional seating support to help them sit comfortably or correctly, as they had been used to sitting like this since kindergarten.

Group B had an office chair with back support but no armrests. This helped build up their back muscles and support the spine while seated, an approach that works well with older kids and adults.

Group C had an office chair with back support and armrests; however, instead of the standard armrests that generally come with office chairs, theirs had unique armrests that didn’t make contact with the arms. Instead, each armrest ended in a thin metal piece about 10 cm long. When placed between the arm and chest while leaning on the chair back, it helped open up the chest for better breathing.

The Most Remarkable Results

As part of this study, three teachers were asked to give their feedback using this scale:

  1. Strongly Disagree
  2. Somewhat Disagree
  3. Neither Agree nor Disagree
  4. Somewhat Agree
  5. Strongly Agree

When faced with the following claim, the teachers averaged 1.3/5, indicating that they strongly disagreed:

“My students do not have problems with staying in their seats and being on task when seated in a typical classroom chair.”

Stability Stools

  • Stability Stools helped my students focus on their task (Average: 4.3/5)
  • My students were able to stay seated longer when seated on the stability stool (Average: 4.3/5)
  • I would use stability stools in my classroom (Average: 5/5)

The Scoop Rocker Chair received the same ratings as the Stability Stools.

The Conclusion Of The Study

From this study, we’ve learned that the right seating can change children’s moods and help kids at risk for emotional/behavioral disabilities stay focused and on task. If you need to buy furniture for your child, think about their comfort first so they’ll be able to develop good habits.

Your child’s spine will thank you!

If you want to learn more about this study, you can download the PDF here.


Oct 21, 2021

Talking Drupal #317 – Govcon Keynote Non-Code Contribution: Using your passion and skills to power open source.

Today we are talking about Non-Code Contribution

TalkingDrupal.com/317

Topics

  • What is Talking Drupal
    • Podcast with audio and video
  • We recorded our 300th episode in June, over 175 guests, 700K audio downloads
  • Weekly episodes covering a variety of topics
  • Most recent: episode 315 with Tara King, Director of Developer Relations at Automattic, Comparing Drupal and WordPress Communities
  • Visit www.talkingdrupal.com
  • This may be a different keynote than you are accustomed to. Talking Drupal is a discussion, and that’s what we are having today.
  • Today we are talking about Non-Code Contribution: Using your passion and skills to power open source.
  • What is contribution in an open-source project?
  • Providing your time, skills or resources to benefit the project
  • Today we’re talking about non-code contributions
  • Early on contribution was considered writing code
  • Over time we have learned to value non-code contributions just as much as code contrib
  • Rather than defining non-code contribution by what it is not, we need a term to define it by what it is
  • Community is built in meetups, camps, and cons
  • Majority of contribution has nothing to do with coding at a camp
    • Attending
    • Speaking
    • Training
    • Organizing
  • Organizing a camp (NEDCamp.org / Nov 19th)
  • Volunteering at a camp
    • Stephen – Sponsorship, lead for many years
    • Nic – Website & Signage
    • John – Current Lead, Day of Logistics, Venue coordination
  • Some other examples of contribution
    • Mentorship
    • Documentation
    • Training
    • Summits
    • Being on a committee/Board
    • Answering questions in issue queue
    • Answering questions in slack
  • Who is a contributor?
  • Is it a self designation or a community designation?
  • Why would you contribute?
    • Contribution is a relationship
    • Give and receive
    • Makes you feel good
    • Benefit Skills
      • Technical
      • Communication
      • Project Management
    • Benefit Career
      • Skills
      • Visibility
      • Building Personal Network
      • Networking at Events
    • Financial Compensation
  • Contribute does not always mean nights and weekends
    • Usually starts that way
  • Contribute as part of your job
    • Employers are open to supporting open source; there are benefits for both company and employee
    • Contribute to external project or contribute internal project to open source
  • Will your company support your time to make non-code contributions (NCC)?
    • 315 we learned about WordPress’ contribution goals
    • Launched in 2014, Five for the Future encourages organizations to contribute five percent of their resources to WordPress development.
  • Government
    • 2016 Federal Source Code Policy
    • Support for open source usage, encourage sharing across agencies
    • 20 percent of created code should be open source
  • Start the Dialog with your company
  • Why do we contribute – Contribution can be personal like donating to your favorite charity or playing your favorite game.
    • Nic
      • I was asked
      • I enjoy giving back
      • Helps my career
    • Stephen
      • Sharing and Learning
    • John
      • To help people and solve technical challenges for people
      • Education and knowledge sharing
      • To support something larger than myself / make the world a better place
  • How did TD Start
    • Long before Joe Rogan’s $100 million podcast deal with Spotify
    • 2008 – With Liberty and Justice for All – 5th grader
    • Obama McCain
    • 7 episodes
    • Mechanics of podcasting, and the work involved in pre- and post-record production
    • Virtual book club with Jason Pamental – pick a web design book, assign weekly chapters, Google Hangout
    • Like to learn – similar Drupal journeys – makings of an interesting podcast… great reason to talk every week
  • When did we start considering it a contribution? When did we start giving contribution credits on Drupal.org?
  • How did Talking Drupal come to be a non-code contribution?
    • It always was a non-code contribution; we just didn’t consider it one at first because the Drupal community was code-focused.
    • Drupal.org Credit for TD started 20 November 2020
  • Community Projects
    • When did the Drupal community start supporting NCC?
  • Why is this important
  • How has the show & other non-code contribution impacted our lives / careers
    • Stephen
      • Friendships
      • Have helped others
    • Nic
      • Friendships
      • Clients
    • John
      • Connections – Hey you are that guy
      • Given me a sense of value
      • Gives me a sense of supporting the community
  • Why are non-code contributions important
    • As valuable to the health of a project as code contributions.
    • There are non-code requirements for all projects
    • Not everyone is a developer/coder
    • Gets more people with a variety of skills involved in the community
    • Moves open source forward
  • Challenges of Contributing
    • Contribution Imposter Syndrome
    • My Contribution isn’t valuable
    • Dealing with concerns that it’s not helpful
  • Focus on your skills and passions
  • Work, life, contribution balance
    • Work it into your work
    • Build a career based on contribution
    • Contrib doesn’t have to be Nights and Weekends
    • Add 30 min to the start or end of your day
    • If you do work at night, tackle one thing a night
    • Provide contrib during your workday
  • Sustainability
  • Projects are easy to do for a short time:
    • Energy is high
    • Newness interesting
  • Most podcasts don’t make it past 8 episodes
    • Long term is a challenge
    • Pre-show guest scheduling, content planning, shownotes
    • Post Production audio and video
    • Release and marketing
    • 1 hour show = 6 – 8 hours
  • Priorities and interests change over time
  • NCC are easier to transition in and out of
    • I had to transition out of my primary roles, and I did; the projects have thrived through those transitions
  • Be honest with yourself
  • How to get involved / How to contribute
    • Just get started
    • Look at your skill set
    • Look at your interests
    • Ask in the issue queue or drupal slack for a starting point
    • You can also reach out to most camp organizers for recommendations
  • Takeaways
    • John
      • Anyone can and everyone should contribute
    • Stephen
      • Your contribution is valuable
    • Nic
      • Code and non code are equal to the long term health of the project

Resources

Hosts

Nic Laflin – www.nLighteneddevelopment.com @nicxvan
John Picozzi – www.epam.com @johnpicozzi
Stephen Cross – @stephencross

Oct 18, 2021

Talking Drupal #316 – Accessibility

Today we are talking about Accessibility with Rain Breaw Michaels.

TalkingDrupal.com/316

Topics

  • We are talking about Accessibility today, but specifically gearing the conversation towards developers. Why?
  • So many people when coding for accessibility simply provide workarounds. Why can this be dangerous?
  • What is ARIA?
  • What is meant by the term Landmarks?
  • How does this apply to devs working with Drupal?
  • What modules do you recommend?
  • How can you help content editors maintain accessibility while adding content?
  • What are common pitfalls you see devs make?
  • How can Javascript help or hurt Accessibility?
  • In closing, is there anything you would like to add?

Module of the Week

Anti Spam by CleanTalk

Resources

  • Clean Talk – Geerling’s Post
  • Anti-spam by Clean Talk
  • Acquia Engage
  • Time Timer
  • Driven to Distraction
  • Design of Everyday Things (book)
  • ColorCube – Color Testing Tool
  • ARIA
  • WAI-ARIA
  • HTML5 Landmarks
  • Module for decorative images
  • Editoria11y
  • Layout Builder
  • Content Strategy for the Web
  • PDF Accessibility course on Deque
  • VPAT
  • Hanlon’s Razor: “never attribute to malice that which is adequately explained by stupidity”

Guest

Rain Breaw Michaels – @rainbreaw

Hosts

Nic Laflin – www.nLighteneddevelopment.com @nicxvan
John Picozzi – www.epam.com @johnpicozzi
Chad Hester – www.chadkhester.com @chadkhester

Oct 15, 2021

Comparing Graviton (ARM) Performance to Intel and AMD for MySQL

Recently, AWS presented its own CPU on ARM architecture for server solutions: Graviton. As a result, AWS updated some of its EC2 instance lines with a new “g” postfix (e.g., m5 → m6g). In its review and presentation, AWS showed impressive results: up to 20 percent faster in some benchmarks. On the other hand, some reviewers said that Graviton does not show any significant gains and, in some cases, performs worse than Intel.

We decided to investigate it and do our research regarding Graviton performance, comparing it with other CPUs (Intel and AMD) directly for MySQL.

Disclaimer

  1. The test is designed to be CPU bound only, so we will use a read-only test and make sure there is no I/O activity during the test.
  2. Tests were run on m5.* (Intel), m5a.* (AMD), and m6g.* (Graviton) EC2 instances in the us-east-1 region. (The list of EC2 instances is in the appendix.)
  3. Monitoring was done with Percona Monitoring and Management (PMM).
  4. OS: Ubuntu 20.04 LTS.
  5. Load tool (sysbench) and target DB (MySQL) installed on the same EC2 instance.
  6. MySQL 8.0.26-0 — installed from official packages.
  7. Load tool: sysbench 1.0.18.
  8. innodb_buffer_pool_size=80% of available RAM.
  9. Test duration is five minutes for each thread count, followed by a 90-second warm down before the next iteration.
  10. Tests were run three times (to smooth outliers and get more reproducible results), and the results were averaged for the graphs.
  11. “High concurrency” refers to scenarios where the number of threads is greater than the number of vCPUs on the EC2 instance; “low concurrency” refers to scenarios where the number of threads is less than or equal to the number of vCPUs.
  12. Scripts to reproduce results on our GitHub.

Test Case

Prerequisite:

1. Create a DB with 10 tables of 10,000,000 rows each

sysbench oltp_read_only --threads=10 --mysql-user=sbtest --mysql-password=sbtest --table-size=10000000 --tables=10 --db-driver=mysql --mysql-db=sbtest prepare

2. Load all data into the buffer pool

sysbench oltp_read_only --time=300 --threads=10 --table-size=1000000 --mysql-user=sbtest --mysql-password=sbtest --db-driver=mysql --mysql-db=sbtest run

Test:

Run the same scenario in a loop with different concurrency levels THREAD (1, 2, 4, 8, 16, 32, 64, 128) on each EC2 instance:

sysbench oltp_read_only --time=300 --threads=${THREAD} --table-size=100000 --mysql-user=sbtest --mysql-password=sbtest --db-driver=mysql --mysql-db=sbtest run
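For reference, the whole sweep can be scripted as a simple shell loop. A minimal sketch (the log file name is illustrative; the five-minute duration and 90-second warm down come from the test design above):

for THREAD in 1 2 4 8 16 32 64 128; do
    # run the read-only scenario at this concurrency level and keep the output
    sysbench oltp_read_only --time=300 --threads=${THREAD} --table-size=100000 --mysql-user=sbtest --mysql-password=sbtest --db-driver=mysql --mysql-db=sbtest run | tee "oltp_ro_${THREAD}threads.log"
    # warm down before the next iteration
    sleep 90
done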

Results:

The results are split into three parts:

  1. for “small” EC2 with 2, 4, and 8 vCPU
  2. for “medium” EC2 with 16 and 32 vCPU
  3. for  “large” EC2 with 48 and 64 vCPU

These “small”, “medium”, and “large” labels are just synthetic names, grouped by the number of vCPUs per EC2 instance, to make the review easier.
There are four graphs for each test:

  1. Throughput (Queries per second) that EC2 could perform for each scenario (amount of threads)
  2. Latency 95 percentile that  EC2 could perform for each scenario (amount of threads)
  3. Relative (percentage) comparison of Graviton and Intel
  4. Absolute (numbers) comparison of Graviton and Intel

Validation that all load went to the CPU, and not to disk I/O, was also done using PMM (Percona Monitoring and Management).

pic 0.1 – OS monitoring during all test stages

From pic.0.1, we can see that there was no DISK I/O activity during tests, only CPU activity. The main activity with disks was during the DB creation stage.

Result for EC2 with 2, 4, and 8 vCPU

plot 1.1. Throughput (queries per second) for EC2 with 2, 4, and 8 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 1.2.  Latencies (95 percentile) during the test for EC2 with 2, 4, and 8 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 1.3. Percentage comparison of Graviton and Intel CPUs in throughput (queries per second) for EC2 with 2, 4, and 8 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 1.4. Absolute (numbers) comparison of Graviton and Intel CPUs in throughput (queries per second) for EC2 with 2, 4, and 8 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

OVERVIEW:

  1. AMD has the biggest latencies in all scenarios and on all EC2 instances. We won’t repeat this in the following overviews, and it is the reason we exclude AMD from the percentage and absolute comparisons with the other CPUs (plots 1.3 and 1.4, etc.).
  2. On instances with two and four vCPUs, Intel shows an advantage of less than 10 percent in all scenarios.
  3. However, on the instance with 8 vCPUs, Intel shows an advantage only in scenarios where the number of threads is less than or equal to the number of vCPUs.
  4. On the EC2 instance with eight vCPUs, Graviton starts to show an advantage. It performs well when the number of threads exceeds the number of vCPUs, gaining up to 15 percent in the high-concurrency scenarios with 64 and 128 threads, which are 8 and 16 times the number of available vCPUs.
  5. This pattern holds in all the scenarios that follow: the more the load exceeds the number of vCPUs, the better Graviton performs.

 

Result for EC2 with 16 and 32 vCPU

plot 2.1.  Throughput (queries per second)  for EC2 with 16 and 32 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 2.2. Latencies (95 percentile) during the test for EC2 with 16 and 32 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 2.3. Percentage comparison of Graviton and Intel CPUs in throughput (queries per second) for EC2 with 16 and 32 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 2.4. Absolute (numbers) comparison of Graviton and Intel CPUs in throughput (queries per second) for EC2 with 16 and 32 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

OVERVIEW:

  1. With the same load on EC2 instances with 16 and 32 vCPUs, Graviton continues to have the advantage when the number of threads is greater than the number of available vCPUs.
  2. Graviton shows an advantage of up to 10 percent in high-concurrency scenarios, while Intel leads by up to 20 percent in low-concurrency scenarios.
  3. In high-concurrency scenarios, Graviton’s lead in (read) transactions per second can reach an impressive 30,000 TPS.

Result for EC2 with 48 and 64 vCPU

plot 3.1.  Throughput (queries per second)  for EC2 with 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 3.2.  Latencies (95 percentile) during the test for EC2 with 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 3.3. Percentage comparison of Graviton and Intel CPUs in throughput (queries per second) for EC2 with 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 3.4. Absolute (numbers) comparison of Graviton and Intel CPUs in throughput (queries per second) for EC2 with 48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

OVERVIEW:

  1. Intel shows a significant advantage in most scenarios where the number of threads is less than or equal to the number of vCPUs; it seems really well suited to this kind of task. The more free vCPUs it has to spare, the better it does, with an advantage of up to 35 percent.
  2. However, Graviton shows outstanding results when the number of threads is larger than the number of vCPUs, with an advantage of 5 to 14 percent over Intel.
  3. In absolute numbers, Graviton’s advantage can reach 70,000 transactions per second over Intel in high-concurrency scenarios.

Total Result Overview

 

plot 4.2.  Latencies (95 percentile) during the test for EC2 with 2,4,8,16,32,48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 4.3. Percentage comparison of Graviton and Intel CPUs in throughput (queries per second) for EC2 with 2,4,8,16,32,48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

 

plot 4.4. Absolute (numbers) comparison of Graviton and Intel CPUs in throughput (queries per second) for EC2 with 2,4,8,16,32,48 and 64 vCPU for scenarios with 1,2,4,8,16,32,64,128 threads

Conclusions

  1. ARM CPUs show better results on EC2 instances with more vCPUs and under higher load, especially in high-concurrency scenarios.
  2. On small EC2 instances under light load, ARM CPUs show less impressive performance, so we can’t see their benefit compared with Intel EC2 instances there.
  3. Intel is still the leader in low-concurrency scenarios, and it definitely wins on EC2 instances with a small number of vCPUs.
  4. AMD does not show competitive results in any case.

Final Thoughts

  1. AMD — we have a lot of questions about the EC2 instances on AMD, so it would be a good idea to check what was going on during the tests and examine the general performance of the CPUs on those instances.
  2. We found that under some specific conditions Intel and Graviton can compete with each other. The other side of the coin is economics: what is cheaper to use in each situation? The next article will cover that.
  3. It would be a good idea to try EC2 instances with Graviton for a real high-concurrency DB.
  4. It would also be worth running additional scenarios with 256 and 512 threads to check the hypothesis that Graviton works better when there are more threads than vCPUs.

APPENDIX:

List of EC2 instances used in the research:

CPU type   EC2            Price per hour (USD)   vCPU   RAM
Graviton   m6g.large      0.077                  2      8 GB
Graviton   m6g.xlarge     0.154                  4      16 GB
Graviton   m6g.2xlarge    0.308                  8      32 GB
Graviton   m6g.4xlarge    0.616                  16     64 GB
Graviton   m6g.8xlarge    1.232                  32     128 GB
Graviton   m6g.12xlarge   1.848                  48     192 GB
Graviton   m6g.16xlarge   2.464                  64     256 GB
Intel      m5.large       0.096                  2      8 GB
Intel      m5.xlarge      0.192                  4      16 GB
Intel      m5.2xlarge     0.384                  8      32 GB
Intel      m5.4xlarge     0.768                  16     64 GB
Intel      m5.8xlarge     1.536                  32     128 GB
Intel      m5.12xlarge    2.304                  48     192 GB
Intel      m5.16xlarge    3.072                  64     256 GB
AMD        m5a.large      0.086                  2      8 GB
AMD        m5a.xlarge     0.172                  4      16 GB
AMD        m5a.2xlarge    0.344                  8      32 GB
AMD        m5a.4xlarge    0.688                  16     64 GB
AMD        m5a.8xlarge    1.376                  32     128 GB
AMD        m5a.12xlarge   2.064                  48     192 GB
AMD        m5a.16xlarge   2.752                  64     256 GB

 

my.cnf:
[mysqld]
ssl=0
performance_schema=OFF
skip_log_bin
server_id = 7

# general
table_open_cache = 200000
table_open_cache_instances=64
back_log=3500
max_connections=4000
join_buffer_size=256K
sort_buffer_size=256K

# files
innodb_file_per_table
innodb_log_file_size=2G
innodb_log_files_in_group=2
innodb_open_files=4000

# buffers
innodb_buffer_pool_size=${80%_OF_RAM}
innodb_buffer_pool_instances=8
innodb_page_cleaners=8
innodb_log_buffer_size=64M

default_storage_engine=InnoDB
innodb_flush_log_at_trx_commit  = 1
innodb_doublewrite= 1
innodb_flush_method= O_DIRECT
innodb_file_per_table= 1
innodb_io_capacity=2000
innodb_io_capacity_max=4000
innodb_flush_neighbors=0
max_prepared_stmt_count=1000000 
bind_address = 0.0.0.0
[client]
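The ${80%_OF_RAM} placeholder has to be computed per instance. A minimal sketch of one way to derive it on a Linux host (this helper is illustrative, not part of the original test scripts):

awk '/MemTotal/ {printf "innodb_buffer_pool_size=%dM\n", $2*0.8/1024}' /proc/meminfo

The printed line can then be substituted into my.cnf before starting MySQL.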

 

Oct 14, 2021

Percona Is a Finalist for Best Use of Open Source Technologies in 2021!


Percona has been named a finalist in the Computing Technology Product Awards for Best Use of Open Source Technologies. If you’re a customer, partner, or just a fan of Percona and what we stand for, we’d love your vote.

With Great Power…

You know the phrase. We’re leaving it to you and your peers in the tech world to push us to the top.

Computing’s Technology Product Awards are open to a public vote until October 29. Vote Here!


Thank you for supporting excellence in the open source database industry. We look forward to the awards ceremony on Friday, November 26, 2021.

Why We’re an Open Source Finalist

A contributing factor to our success has been Percona Monitoring and Management (PMM), an open source database monitoring solution. It helps you reduce complexity, optimize performance, and improve the security of your business-critical MySQL, MongoDB, PostgreSQL, and MariaDB database environments, no matter where they are located or deployed. It’s impressing customers, and even competitors, in the industry.

To see why Percona became a finalist, learn more about Percona Monitoring and Management, and be sure to follow @Percona on all platforms.

Vote Today!

Oct 14, 2021

Custom Percona Monitoring and Management Metrics in MySQL and PostgreSQL

A few weeks ago we did a live stream talking about Percona Monitoring and Management (PMM) and showcased some of the fun things we were doing at the OSS Summit. During the live stream, we tried to enable some custom queries to track the number of comments being added to our movie database example. We ran into a bit of a problem live and did not get it to work. As a result, I wanted to follow up and show you how to add your own custom metrics to PMM, along with some gotchas to avoid when building them.

Custom metrics are defined in a file deployed on each server you are monitoring (not on the PMM server itself). You can add custom metrics by navigating to one of the following directories:

  • For MySQL:  /usr/local/percona/pmm2/collectors/custom-queries/mysql
  • For PostgreSQL:  /usr/local/percona/pmm2/collectors/custom-queries/postgresql
  • For MongoDB:  This feature is not yet available – stay tuned!

You will notice the following directories under each directory:

  • high-resolution/  – every 5 seconds
  • medium-resolution/ – every 10 seconds
  • low-resolution/ – every 60 seconds

Note that you can change the frequency of the default metric collections up or down in the PMM settings. It would be ideal if, in the future, we added a resolution config directly in the YML file, but for now it is a universal setting:

Percona Monitoring and Management metric collections

In each directory you will find an example .yml file with a format like the following:

mysql_oss_demo: 
  query: "select count(1) as comment_cnt from movie_json_test.movies_normalized_user_comments;"
  metrics: 
    - comment_cnt: 
        usage: "GAUGE" 
        description: "count of the number of comments coming in"

Our error during the live stream was that we forgot to qualify the table with its database in the query (i.e., database_name.table_name), and a bug prevented us from seeing the error in the log files. There is no setting for the database in the YML, so take note.

This will create a metric named mysql_oss_demo_comment_cnt in whatever resolution you specify. Each YML file executes separately with its own connection. This is important to understand: if you deploy lots of custom queries, you will see a steady number of extra connections (consider this if you are doing custom collections). Alternatively, you can add several queries and metrics to the same file, but they are executed sequentially. If the entire YML file cannot complete in less time than the defined resolution (i.e., finish within five seconds for high resolution), the data will not be stored, but the query will continue to run. This can lead to a query pile-up if you are not careful. For instance, the above query generally takes 1-2 seconds to return the count, so I placed it in the medium bucket. As I added load to the system, the query time backed up.

You can see the slowdown.  You need to be careful here and choose the appropriate resolution.  Moving this over to the low resolution solved the issue for me.

That said, query response time is dynamic based on the conditions of your server.  Because these queries will run to completion (and in parallel if the run time is longer than the resolution time), you should consider limiting the query time in MySQL and PostgreSQL to prevent too many queries from piling up.

In MySQL you can use an optimizer hint (note that MAX_EXECUTION_TIME takes milliseconds):

mysql>  select /*+ MAX_EXECUTION_TIME(4) */  count(1) as comment_cnt from movie_json_test.movies_normalized_user_comments ;
ERROR 1317 (70100): Query execution was interrupted

And on PostgreSQL you can use:

SET statement_timeout = '4s'; 
select count(1) as comment_cnt from movies_normalized_user_comments ;
ERROR:  canceling statement due to statement timeout

By forcing a timeout you can protect yourself.  That said, these are “errors” so you may see errors in the error log.
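Since the collector runs the query verbatim, the MySQL hint can also be baked directly into the custom-query file so the collection itself is capped. A sketch reusing the demo metric from above (the 4000 ms cap is an assumption chosen to mirror the 4-second PostgreSQL timeout, not a recommendation):

mysql_oss_demo:
  query: "select /*+ MAX_EXECUTION_TIME(4000) */ count(1) as comment_cnt from movie_json_test.movies_normalized_user_comments;"
  metrics:
    - comment_cnt:
        usage: "GAUGE"
        description: "count of the number of comments coming in"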

You can check the system logs (syslog or messages) for errors with your custom queries (note that as of PMM 2.0.21, errors were not making it into these logs because of a potential bug). If the data is being collected and everything is set up correctly, head over to the default Grafana explorer or the “Advanced Data Exploration” dashboard in PMM. Look for your metric, and you should be able to see the data graphed out:

Advanced Data Exploration PMM

In the above screenshot, you will notice some pretty big gaps in the data (in green).  These gaps were caused by our query taking longer than the resolution bucket.  You can see when we moved to 60-second resolution (in orange), the graphs filled in.

Percona Monitoring and Management is a best-of-breed open source database monitoring solution. It helps you reduce complexity, optimize performance, and improve the security of your business-critical database environments, no matter where they are located or deployed.

Download Percona Monitoring and Management Today

Oct 13, 2021

Migrating MongoDB to Kubernetes

This blog post is the last in the series of articles on migrating databases to Kubernetes with Percona Operators. Two previous posts can be found here:

As you might have guessed, this time we are going to cover the migration of MongoDB to Kubernetes. In the 1.10.0 release of Percona Distribution for MongoDB Operator, we introduced a new feature (in tech preview) that enables users to perform such migrations through regular MongoDB replication capabilities. We have already shown how it can be used to provide cross-regional disaster recovery for MongoDB, and we encourage you to read that post.

The Goal

There are two ways to migrate the database:

  1. Take the backup and restore it.
    – This option is the simplest one, but unfortunately comes with downtime. The bigger the database, the longer the recovery time is.
  2. Replicate the data to the new site and switch the application once replicas are in sync.
    – This allows the user to perform the migration and switch the application with either zero or little downtime.

This blog post is a walkthrough of how to migrate a MongoDB replica set to Kubernetes using replication capabilities.


  1. We have a MongoDB cluster somewhere (the Source). It can be on-prem or some virtual machine. For demo purposes, I’m going to use a standalone replica set node. The migration procedure of a replica set with multiple nodes or sharded cluster is almost identical.
  2. We have a Kubernetes cluster with Percona Operator (the Target). The operator deploys 3 standalone MongoDB nodes in unmanaged mode (we will talk about it below).
  3. Each node is exposed so that the nodes on the Source can reach them.
  4. We are going to replicate the data to Target nodes by adding them into the replica set.

As always, all the scripts and configuration files for this blog post are publicly available in this GitHub repository.

Prerequisites

  • MongoDB cluster – either on-prem or VM. It is important to be able to configure mongod to some extent and add external nodes to the replica set.
  • Kubernetes cluster for the Target.
  • kubectl to deploy and manage the Operator and database on Kubernetes.

Prepare the Source

This section explains what preparations must be made on the Source to set up the replication.

Expose

All nodes in the replica set must form a mesh and be able to reach each other. The communication between the nodes can go through the public internet or some VPN. For demonstration purposes, we are going to expose the Source to the public internet by editing mongod.conf:

net:
  bindIp: <PUBLIC IP>

If you have multiple replica sets – you need to expose all nodes of each of them, including config servers.
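After editing mongod.conf, restart the daemon so the new bind address takes effect (assuming a systemd-managed installation; adjust the service name if yours differs):

$ sudo systemctl restart mongod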

TLS

We take security seriously at Percona, which is why our Operator deploys MongoDB clusters with encryption enabled by default. I have prepared a script that generates self-signed certificates and keys with the openssl tool. If you already have a Certificate Authority (CA) in use in your organization, generate the certificates and have them signed by your CA.

The list of alternative names can be found either in this document or in this openssl configuration file. Note DNS.20 entry:

DNS.20      = *.mongo.spron.in

I’m going to use this wildcard entry to set up the replication between the nodes. The script also generates an ssl-secret.yaml file, which we are going to use on the Target side.

You need to upload the CA and the certificate with its private key to every Source replica set node and then define them in mongod.conf:

# network interfaces
net:
  ...
  tls:
    mode: preferTLS
    CAFile: /path/to/ca.pem
    certificateKeyFile: /path/to/mongod.pem

security:
  clusterAuthMode: x509
  authorization: enabled

Note that I also set clusterAuthMode to x509, which enforces the use of x509 authentication. Test it carefully in a non-production environment first, as it might break your existing replication.
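A quick way to confirm that the certificate actually carries the expected names before rolling it out (an illustrative check; the -ext flag assumes OpenSSL 1.1.1 or newer):

$ openssl x509 -in /path/to/mongod.pem -noout -subject -ext subjectAltName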

Create System Users

Our Operator needs system users to manage the cluster and perform health checks. Usernames and passwords for the system users should be the same on the Source and the Target. This script generates a user-secret.yaml to use on the Target, as well as mongo shell code to add the users on the Source (it is an example; do not use it in production).

Connect to the primary node on the Source and execute mongo shell commands generated by the script.
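For orientation, the generated commands are essentially db.createUser() calls. A minimal sketch of one such call (the username, password, and role here are purely illustrative; the real users and credentials come from the script):

use admin
db.createUser({
  user: "clusterAdmin",
  pwd: "clusterAdminPassword",  // illustrative; use the generated password
  roles: [ { role: "clusterAdmin", db: "admin" } ]
})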

Prepare the Target

Apply Users and TLS secrets

System users’ credentials and TLS certificates must be the same on both sides. The scripts we used above generate Secret object manifests for the Target. Apply them:

$ kubectl apply -f ssl-secret.yaml
$ kubectl apply -f user-secret.yaml

Deploy the Operator and MongoDB Nodes

Please follow one of the installation guides to deploy the Operator. Usually, it is a one-step operation through kubectl:

$ kubectl apply -f operator.yaml

MongoDB nodes are deployed with a custom resource manifest – cr.yaml. There are the following important configuration items in it:

spec:
  unmanaged: true

This flag instructs Operator to deploy the nodes in unmanaged mode, meaning they are not configured to form the cluster. Also, the Operator does not generate TLS certificates and system users.

spec:
…
  updateStrategy: Never

Disable the Smart Update feature as the cluster is unmanaged.

spec:
…
  secrets:
    users: my-new-cluster-secrets
    ssl: my-custom-ssl
    sslInternal: my-custom-ssl-internal

This section points to the Secret objects that we created in the previous step.

spec:
…
  replsets:
  - name: rs0
    size: 3
    expose:
      enabled: true
      exposeType: LoadBalancer

Remember that the nodes need to be exposed and reachable. To achieve this, we create a service per Pod. In our case, it is a LoadBalancer object, but it can be any other Service type.

spec:
...
  backup:
    enabled: false

If the cluster and nodes are unmanaged, the Operator should not be taking backups. 

Deploy unmanaged nodes with the following command:

$ kubectl apply -f cr.yaml

Once the nodes are up and running, also check the services. We will need the IP addresses of the new replicas to add them to the replica set on the Source later.

$ kubectl get services
NAME                    TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)           AGE
…
my-new-cluster-rs0-0    LoadBalancer   10.3.252.134   35.223.104.224   27017:31332/TCP   2m11s
my-new-cluster-rs0-1    LoadBalancer   10.3.244.166   34.134.210.223   27017:32647/TCP   81s
my-new-cluster-rs0-2    LoadBalancer   10.3.248.119   34.135.233.58    27017:32534/TCP   45s

Configure Domains

X509 authentication is strict and requires that the certificate’s common name or alternative name match the domain name of the node. As you remember, we had the wildcard *.mongo.spron.in included in our certificate. It can be any domain that you use, but make sure a certificate is issued for this domain.

I’m going to create A-records pointing to the public IP addresses of the MongoDB nodes:

k8s-1.mongo.spron.in -> 35.223.104.224
k8s-2.mongo.spron.in -> 34.134.210.223
k8s-3.mongo.spron.in -> 34.135.233.58
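Before adding the nodes to the replica set, it is worth verifying that the records resolve as expected (an illustrative check with dig):

$ dig +short k8s-1.mongo.spron.in
35.223.104.224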

Replicate the Data to the Target

It is time to add our nodes in the Kubernetes cluster to the replica set. Log into the mongo shell on the Source and execute the following:

rs.add({ host: "k8s-1.mongo.spron.in", priority: 0, votes: 0} )
rs.add({ host: "k8s-2.mongo.spron.in", priority: 0, votes: 0} )
rs.add({ host: "k8s-3.mongo.spron.in", priority: 0, votes: 0} )

If everything is done correctly, these nodes are going to be added as secondaries. You can check the status with the rs.status() command.

Cutover

Check that the newly added nodes are synchronized. The more data you have, the longer the synchronization process is going to take. To understand whether the nodes are synchronized, compare the values of optime and optimeDate of the Primary node with the values for the Secondary nodes in the rs.status() output:

{
        "_id" : 0,
        "name" : "147.182.213.59:27017",
        "stateStr" : "PRIMARY",
...
        "optime" : {
                "ts" : Timestamp(1633697030, 1),
                "t" : NumberLong(2)
        },
        "optimeDate" : ISODate("2021-10-08T12:43:50Z"),
...
},
{
        "_id" : 1,
        "name" : "k8s-1.mongo.spron.in:27017",
        "stateStr" : "SECONDARY",
...
        "optime" : {
                "ts" : Timestamp(1633697030, 1),
                "t" : NumberLong(2)
        },
        "optimeDurable" : {
                "ts" : Timestamp(1633697030, 1),
                "t" : NumberLong(2)
        },
        "optimeDate" : ISODate("2021-10-08T12:43:50Z"),
...
},

When nodes are synchronized, we are ready to perform the cutover. Please ensure that your application is configured properly to minimize downtime during the cutover.

The cutover is going to have two steps:

  1. One of the nodes on the Target becomes the primary.
  2. Operator starts managing the cluster and nodes on the Source are no longer present in the replica set.

Switch the Primary

Connect with mongo shell to the primary on the Source side and make one of the nodes on the Target primary. It can be done by changing the replica set configuration:

cfg = rs.config()
cfg.members[1].priority = 2
cfg.members[1].votes = 1
rs.reconfig(cfg)

We enable voting and set the priority to two on one of the nodes in the Kubernetes cluster. The member id may be different for you, so please look carefully at the output of the rs.config() command.

Start Managing the Cluster

Once the primary is running in Kubernetes, we are going to tell the Operator to start managing the cluster. Change spec.unmanaged to false in the Custom Resource with the patch command:

$ kubectl patch psmdb my-cluster-name --type=merge -p '{"spec":{"unmanaged": false}}'

You can also do this by changing cr.yaml and applying it. That is it: you now have a cluster in Kubernetes that is managed by the Operator.

Conclusion

You truly start to appreciate Operators once you get used to them. When I was writing this blog post, I found it extremely annoying to deploy and configure a single MongoDB node on a Linux box, and I didn’t even want to think about a whole cluster. Operators abstract away Kubernetes primitives and database configuration, and provide you with a fully operational database service instead of a bunch of nodes. Migrating MongoDB to Kubernetes is a challenging task, but it is much simpler with an Operator. And once you are on Kubernetes, the Operator takes care of all day-2 operations as well.

We encourage you to try out our operator. See our GitHub repository and check out the documentation.

Found a bug or have a feature idea? Feel free to submit it in JIRA.

For general questions, please raise the topic in the community forum.

Are you a developer looking to contribute? Please read our CONTRIBUTING.md and send a Pull Request.

Percona Distribution for MongoDB Operator

The Percona Distribution for MongoDB Operator simplifies running Percona Server for MongoDB on Kubernetes and provides automation for day-1 and day-2 operations. It’s based on the Kubernetes API and enables highly available environments. Regardless of where it is used, the Operator creates a member that is identical to other members created with the same Operator. This provides an assured level of stability to easily build test environments or deploy a repeatable, consistent database environment that meets Percona expert-recommended best practices.

Complete the 2021 Percona Open Source Data Management Software Survey

Have Your Say!
