May
23
2019
--

Takeaways from KubeCon; the latest on Kubernetes and cloud native development

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Frederic Lardinois and Ron Miller discuss the major announcements that came out of the Linux Foundation’s European KubeCon/CloudNativeCon conference and the future of Kubernetes and cloud-native technologies.

Nearly doubling in size year-over-year, this year’s KubeCon conference brought big news and big players, with major announcements coming from some of the world’s largest software vendors including Google, AWS, Microsoft, Red Hat, and more. Frederic and Ron discuss how the Kubernetes project grew to such significant scale and which new initiatives in cloud-native development show the most promise from both a developer and enterprise perspective.

“This ecosystem starts sprawling, and we’ve got everything from security companies to service mesh companies to storage companies. Everybody is here. The whole hall is full of them. Sometimes it’s hard to distinguish between them because there are so many competing start-ups at this point.

I’m pretty sure we’re going to see a consolidation in the next six months or so where some of the bigger players, maybe Oracle, maybe VMware, will start buying some of these smaller companies. And I’m sure the show floor will look quite different about a year from now. All the big guys are here because they’re all trying to figure out what’s next.”

Frederic and Ron also dive deeper into the startup ecosystem rapidly developing around Kubernetes and other cloud-native technologies and offer their take on what areas of opportunity may prove to be most promising for new startups and founders down the road.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 

Apr
16
2019
--

Google expands its container service with GKE Advanced

With its Kubernetes Engine (GKE), Google Cloud has long offered a managed service for running containers on its platform. Kubernetes users tend to have a variety of needs, but so far, Google only offered a single tier of GKE that wasn’t necessarily geared toward the high-end enterprise users the company is trying to woo. Today, however, the company announced a new advanced edition of GKE that introduces an enhanced, financially backed SLA, additional security tools and new automation features. You can think of GKE Advanced as the enterprise version of GKE.

The new service will launch in the second quarter of the year; Google hasn’t yet announced pricing. The regular version of GKE is now called GKE Standard.

Google says the service builds on the company’s own experience of running a complex container infrastructure internally for years.

For enterprise customers, the financially backed SLA is surely a nice bonus. The promise here is 99.95 percent guaranteed availability for regional clusters.

Most users who opt for a managed Kubernetes environment do so because they don’t want to deal with the hassle of managing these clusters themselves. With GKE Standard, there’s still some work to be done with regard to scaling the clusters. Because of this, GKE Advanced includes a Vertical Pod Autoscaler that keeps an eye on resource utilization and adjusts it as necessary, as well as Node Auto Provisioning, an enhanced version of the cluster autoscaling in GKE Standard.
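
For readers who want a concrete picture, here is a rough sketch of what enabling these capabilities looks like from the gcloud CLI today; the cluster name, region and resource limits are placeholder assumptions, and the exact flags and their availability under GKE Advanced may differ from what the current gcloud release exposes.

# Illustrative only: create a cluster with vertical pod autoscaling and
# node auto-provisioning enabled (name, region and limits are placeholders)
gcloud container clusters create demo-cluster \
   --region us-central1 \
   --enable-vertical-pod-autoscaling \
   --enable-autoprovisioning \
   --min-cpu 1 --max-cpu 64 \
   --min-memory 4 --max-memory 256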

In addition to these new GKE Advanced features, Google is adding GKE security features like the GKE Sandbox, which is currently in beta and will come exclusively to GKE Advanced once it’s launched, and the ability to enforce that only signed and verified images are used in the container environment.

The Sandbox uses Google’s gVisor container sandbox runtime. With this, every sandbox gets its own user-space kernel, adding an additional layer of security. With Binary Authorization, GKE Advanced users can also ensure that all container images are signed by a trusted authority before they are put into production. Somebody could theoretically still smuggle malicious code into the containers, but this process, which enforces standard container release practices, should ensure that only authorized containers can run in the environment.
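
To make the sandboxing model concrete, here is a minimal sketch of how a Kubernetes workload opts into gVisor; the pod and image names are invented for illustration, and it assumes GKE Sandbox has been enabled on the cluster’s node pool.

# Hypothetical pod spec: requesting the gVisor runtime class runs the
# pod's containers inside the sandbox (its own user-space kernel)
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app
spec:
  runtimeClassName: gvisor
  containers:
  - name: app
    image: nginx
EOF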

GKE Advanced also includes support for GKE usage metering, which allows companies to keep tabs on who is using a GKE cluster and charge them accordingly. This feature, too, will be exclusive to GKE Advanced.

Nov
23
2018
--

BlueCargo optimizes stacks of containers for maximum efficiency

Meet BlueCargo, a logistics startup focused on seaport terminals. The company was part of Y Combinator’s latest batch and recently raised a $3 million funding round from 1984 Ventures, Green Bay Ventures, Sound Ventures, Kima Ventures and others.

If you picture a terminal, chances are you see huge piles of containers. But current sorting methods are not efficient at all. Yard cranes end up moving a ton of containers just to reach a container sitting at the bottom of the pile.

BlueCargo wants to optimize those movements by helping you store containers in the right spot, so that the first container due to leave the terminal sits at the top of the pile.

“Terminals spend a lot of time making unproductive or undesired movements,” co-founder and CEO Alexandra Griffon told me. “And yet, terminals only generate revenue every time they unload or load a container.”

Right now, ERP-like solutions only manage containers according to a handful of business rules that don’t take into account the timeline of a container. Empty containers are all stored in one area, containers with dangerous goods are in another area, etc.

The startup leverages as much data as possible on each container — where it’s coming from, the type of container, if it’s full or empty, the cargo ship that carried it, the time of the year and more.

Every time BlueCargo works with a new terminal, the startup collects past data and processes it to create a model. The team can then predict how BlueCargo can optimize the terminal.

“At Saint-Nazaire, we could save 22 percent on container shifting,” Griffon told me.

The company will test its solution in Saint-Nazaire in December. It integrates directly with existing ERP solutions. Cranes already scan container identification numbers. BlueCargo could then instantly push relevant information to crane operators so that they know where to put down a container.

Saint-Nazaire is a relatively small port compared to the biggest European ports. But the company is already talking with terminals in Long Beach, one of the largest container ports in the U.S.

BlueCargo also knows that it needs to tread carefully — many companies have promised magical IT solutions in the past, yet little has changed in seaports.

That’s why the startup wants to be as seamless as possible. It only charges fees based on shifting savings — 30 percent of what it would have cost you with the old model. And it doesn’t want to alter workflows for people working at terminals — it’s like an invisible crane that helps you work faster.

There are six dominant players managing terminals around the world. If BlueCargo can convince those companies to work with the startup, it would represent a good business opportunity.

Jun
14
2018
--

Percona Monitoring and Management: Look After Your pmm-data Container

If you have already deployed PMM Server using Docker, you might be aware that deployment begins by creating a special container for persistent PMM data. In this post, I aim to explain the importance of the pmm-data container when you deploy PMM Server with Docker. By the end of this post, you will have a fair idea of why this Docker container is needed.

Percona Monitoring and Management (PMM) is a free and open-source solution for database troubleshooting and performance optimization that you can run in your own environment. It provides time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

What is the purpose of pmm-data?

Well, as its name suggests, when PMM Server runs via Docker its data is stored in the pmm-data container. It’s a dedicated data-only container which you create with bind mounts using -v, i.e. data volumes for holding persistent PMM data. We use pmm-data to compartmentalize the persistent data so you can more easily back up and move data consistently across instances or containers. It acts as a single access point from which other running containers (in this case pmm-server) can access the data volumes.

The pmm-data container does not run; its data is used by pmm-server to build graphs. PMM Server is the core of PMM: it aggregates the collected data and presents it in the form of tables, dashboards and graphs in a web interface.

Why do we use docker create?

The docker create command instructs the Docker daemon to create a writable container layer over the Docker image. When you execute docker create using the steps shown, it creates a Docker container named pmm-data and initializes data volumes using the -v flag in conjunction with the create command (e.g. /opt/prometheus/data).

Option -v is used multiple times in current versions of PMM to mount multiple data volumes. This allows you to create the data volume container and then use its volumes from another container, i.e. pmm-server. We do not want to run the pmm-data container, only to create it. Note: the number of data volumes bind mounted may change between versions of PMM.

$ docker create \
   -v /opt/prometheus/data \
   -v /opt/consul-data \
   -v /var/lib/mysql \
   -v /var/lib/grafana \
   --name pmm-data \
   percona/pmm-server:latest /bin/true

Make sure that the data volumes you initialize with the -v option match those given in the example. PMM Server expects you to have bind mounted those directories exactly as demonstrated in the deployment steps. For using different mount points for your PMM deployment, please refer to this blog post. Data volumes are very useful: once designated and created, you can share them and include them as part of other containers. If you use -v or --volume to bind-mount a file or directory that does not yet exist on the Docker host, -v creates the endpoint for you, and it is always created as a directory. Data in the pmm-data volumes is actually hosted on the host’s filesystem.
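
If you want to check where those volumes actually live on the host, docker inspect reports the container’s mounts; a minimal example follows (the host paths it prints will vary per system).

# Show the data volumes attached to pmm-data and their host locations
docker inspect --format '{{ json .Mounts }}' pmm-data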

Why does pmm-data not run?

As we used docker create and not docker run for pmm-data, this container does not run. It simply exists to make sure you retain all PMM data when you upgrade to a newer PMM Server image. The data volumes bind mounted on the pmm-data container are shared with the running pmm-server container because the --volumes-from option is used when launching pmm-server. This way we persist data with Docker without binding it to pmm-server, by storing the files on the host machine. As long as pmm-data exists, the data exists.
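
For reference, a typical pmm-server launch that attaches those volumes looks roughly like this; the port mapping and image tag are the commonly documented defaults and may differ in your deployment.

# Launch pmm-server and attach the data volumes held by pmm-data
docker run -d \
   -p 80:80 \
   --volumes-from pmm-data \
   --name pmm-server \
   --restart always \
   percona/pmm-server:latest

Upgrading to a newer PMM Server image then only involves stopping and removing the pmm-server container and re-running the command above with the newer tag; pmm-data and its volumes stay in place.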

You can stop, destroy, or replace a container. When a non-running container is using a volume, the volume is still available to Docker and is not removed automatically. You can easily replace the running pmm-server container with a newer version without any impact or loss of data; that is exactly why the persistent data lives in a data volume. In our case, the pmm-data container itself never writes to those volumes, which avoids the possibility of corruption from concurrent writers.

Why can’t I remove the pmm-data container? What happens if I delete it?

Removing the pmm-data container results in the loss of all collected metrics data.

If you remove containers that mount the volumes, including the initial pmm-server container or any subsequent container that mounts them, such as pmm-server-2, you do not delete the volumes. This allows you to upgrade — or effectively migrate — data volumes between containers. Your data container might be based on an old version of the image, with known security problems. That is not a big problem, since it doesn’t actually run anything, but it doesn’t feel right.

As noted earlier, pmm-data stores metrics data according to the configured retention period. You should not remove or recreate the pmm-data container unless you need to wipe out all PMM data and start again. To delete the volumes from disk, you must explicitly call docker rm -v against the container that references them.
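
To make that warning concrete, this is the kind of sequence that actually destroys the data; it is shown only so you recognize it, not as something to run.

# DESTRUCTIVE: removing pmm-data with -v also deletes its volumes,
# i.e. all collected PMM metrics
docker stop pmm-server && docker rm pmm-server
docker rm -v pmm-data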

Some do’s and don’ts

  • Allocate enough disk space on the host for pmm-data to retain its data.
    By default, Prometheus stores time-series data for 30 days, and QAN stores query data for 8 days.
  • Manage data retention appropriately according to the disk space you have available.
    You can back up pmm-data by extracting the data from the container, using the steps mentioned here, to avoid data loss (a minimal example follows below).
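
As a minimal illustration of the backup idea (the linked post covers the full procedure), the classic Docker volume-backup pattern mounts the pmm-data volumes into a throwaway container and archives them onto the host; the backup path and the choice of the busybox image here are arbitrary examples.

# Archive the Prometheus data volume to a host directory (example path)
mkdir -p /root/pmm-backup
docker run --rm --volumes-from pmm-data -v /root/pmm-backup:/backup \
   busybox tar czf /backup/prometheus-data.tar.gz /opt/prometheus/data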

In case of any issues with metrics, here’s a good blog post regarding troubleshooting.


May
09
2016
--

Rancher Labs raises $20M Series B round for its container management platform

Rancher Labs, a container management platform that supports both Kubernetes and Docker Swarm, today announced that it has raised a $20 million Series B round. This new funding round was led by new investor GRC SinoGreen, a Chinese private equity and venture capital fund, with participation from existing investors Mayfield and Nexus Venture Partners. This round brings the company’s…

Mar
30
2016
--

Docker MySQL Replication 101

In this blog post, we’ll discuss some of the basics regarding Docker MySQL replication. Docker has gained widespread popularity in recent years as a lightweight alternative to virtualization. It is ideal for building virtual development and testing environments. The solution is flexible and seamlessly integrates with popular CI tools.

 

This post walks through the setup of MySQL replication with Docker using Percona Server 5.6 images. To keep things simple we’ll configure a pair of instances and override only the most important variables for replication. You can add whatever other variables you want to override in the configuration files for each instance.

Note: the configuration described here is suitable for development or testing. We’ve also used the operating system repository packages; for the latest version, use the official Docker images. The steps described can be used to set up more slaves if required, as long as each slave has a different server-id.

First, install Docker and pull the Percona images (this will take some time and is only executed once):

# Docker install for Debian / Ubuntu
apt-get install docker.io
# Docker install for Red Hat / CentOS (requires EPEL repo)
yum install epel-release # If not installed already
yum install docker-io
# Pull docker repos
docker pull percona

Now create locally persisted directories for the following:

  1. Instance configuration
  2. Data files
# Create local data directories
mkdir -p /opt/Docker/masterdb/data /opt/Docker/slavedb/data
# Create local my.cnf directories
mkdir -p /opt/Docker/masterdb/cnf /opt/Docker/slavedb/cnf
### Create configuration files for master and slave
vi /opt/Docker/masterdb/cnf/config-file.cnf
# Config Settings:
[mysqld]
server-id=1
binlog_format=ROW
log-bin
vi /opt/Docker/slavedb/cnf/config-file.cnf
# Config Settings:
[mysqld]
server-id=2

Great, now we’re ready to start our instances and configure replication. Launch the master node, configure the replication user and get the initial replication co-ordinates:

# Launch master instance
docker run --name masterdb -v /opt/Docker/masterdb/cnf:/etc/mysql/conf.d -v /opt/Docker/masterdb/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=mysecretpass -d percona:5.6
00a0231fb689d27afad2753e4350192bebc19ab4ff733c07da9c20ca4169759e
# Create replication user
docker exec -ti masterdb 'mysql' -uroot -pmysecretpass -vvv -e"GRANT REPLICATION SLAVE ON *.* TO repl@'%' IDENTIFIED BY 'slavepass'\G"
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
GRANT REPLICATION SLAVE ON *.* TO repl@"%"
--------------
Query OK, 0 rows affected (0.02 sec)
Bye
### Get master status
docker exec -ti masterdb 'mysql' -uroot -pmysecretpass -e"SHOW MASTER STATUS\G"
mysql: [Warning] Using a password on the command line interface can be insecure.
*************************** 1. row ***************************
             File: mysqld-bin.000004
         Position: 310
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set:

If you look carefully at the “docker run” command for masterdb, you’ll notice we’ve defined two paths to share from local storage:

/opt/Docker/masterdb/data:/var/lib/mysql

  • This maps the local “/opt/Docker/masterdb/data” directory to the masterdb container’s “/var/lib/mysql” path
  • All files within the datadir “/var/lib/mysql” persist locally on the host running docker rather than in the container
/opt/Docker/masterdb/cnf:/etc/mysql/conf.d

  • This maps the local “/opt/Docker/masterdb/cnf” directory to the container’s “/etc/mysql/conf.d” path
  • The configuration files for the masterdb instance persist locally as well
  • Remember these files augment or override the file in “/etc/mysql/my.cnf” within the container (i.e., defaults will be used for all other variables)
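
A quick sanity check (not part of the original steps) is to confirm that the mounted config-file.cnf actually took effect by querying the overridden variables on the running master:

# Confirm the overrides from config-file.cnf are active on the master
docker exec -ti masterdb 'mysql' -uroot -pmysecretpass \
   -e"SHOW VARIABLES LIKE 'server_id'; SHOW VARIABLES LIKE 'binlog_format';"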

We’re done setting up the master, so let’s continue with the slave instance. For this instance, the “docker run” command also includes the “--link masterdb:mysql” option, which links the slave instance to the master instance for replication.

After starting the instance, set the replication co-ordinates captured in the previous step:

docker run --name slavedb -v /opt/Docker/slavedb/cnf:/etc/mysql/conf.d -v /opt/Docker/slavedb/data:/var/lib/mysql --link masterdb:mysql -e MYSQL_ROOT_PASSWORD=mysecretpass -d percona:5.6
eb7141121300c104ccee0b2df018e33d4f7f10bf5d98445ed4a54e1316f41891
docker exec -ti slavedb 'mysql' -uroot -pmysecretpass -e'change master to master_host="mysql",master_user="repl",master_password="slavepass",master_log_file="mysqld-bin.000004",master_log_pos=310;' -vvv
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
change master to master_host="mysql",master_user="repl",master_password="slavepass",master_log_file="mysqld-bin.000004",master_log_pos=310
--------------
Query OK, 0 rows affected, 2 warnings (0.23 sec)
Bye

Almost ready to go! The last step is to start replication and verify that replication is running:

# Start replication
docker exec -ti slavedb 'mysql' -uroot -pmysecretpass -e"START SLAVE;" -vvv
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
START SLAVE
--------------
Query OK, 0 rows affected, 1 warning (0.00 sec)
Bye
# Verify replication is running OK
docker exec -ti slavedb 'mysql' -uroot -pmysecretpass -e"SHOW SLAVE STATUS\G" -vvv
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
SHOW SLAVE STATUS
--------------
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: mysql
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysqld-bin.000004
          Read_Master_Log_Pos: 310
               Relay_Log_File: mysqld-relay-bin.000002
                Relay_Log_Pos: 284
        Relay_Master_Log_File: mysqld-bin.000004
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 310
              Relay_Log_Space: 458
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
                  Master_UUID: 230d005a-f1a6-11e5-b546-0242ac110004
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
1 row in set (0.00 sec)
Bye

Finally, we have a pair of dockerized Percona Server 5.6 master-slave servers replicating!

As mentioned before, this is suitable for a development or testing environment. Before going into production with this configuration, think carefully about the tuning of the “my.cnf” variables and the choice of disks used for the data/binlog directories. It is important to remember that newer versions of Docker recommend using “networks” rather than “linking” for communication between containers.
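
As a rough sketch of that network-based approach (the network name below is arbitrary and the rest mirrors the commands above), the two instances could be attached to a user-defined bridge network instead of being linked:

# Modern alternative to --link: attach both containers to a user-defined network
docker network create mysql-repl
docker run --name masterdb --network mysql-repl \
   -v /opt/Docker/masterdb/cnf:/etc/mysql/conf.d \
   -v /opt/Docker/masterdb/data:/var/lib/mysql \
   -e MYSQL_ROOT_PASSWORD=mysecretpass -d percona:5.6
docker run --name slavedb --network mysql-repl \
   -v /opt/Docker/slavedb/cnf:/etc/mysql/conf.d \
   -v /opt/Docker/slavedb/data:/var/lib/mysql \
   -e MYSQL_ROOT_PASSWORD=mysecretpass -d percona:5.6
# On the same network the containers resolve each other by name, so the
# slave's CHANGE MASTER statement would use master_host="masterdb" instead of "mysql"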
