Oct 28, 2010

Percona Server 5.1.51-rel11.5

Percona Community,

Percona Server version 5.1.51-rel11.5 is now available for download.

The main purpose of this release is to update the current Percona stable release to the latest version of MySQL 5.1.

Functionality Added or Changed

  •  Percona Server 5.1.51-rel11.5 is now based on MySQL 5.1.51.
  •  New Features Added: None
  •  Other Changes: None

Bugs Fixed

  •  Bug #661354 – Fixed a problem compiling query_cache_with_comments for 5.1.51-rel11.5. (Oleg Tsarev)
  •  Bug #661844 – Fixed a problem with the server variables test failing for 5.1.51-rel11.5. (Oleg Tsarev)

The Release Notes for this and previous releases can be found in our Wiki.

The binary packages are available on our website.
The latest source code for Percona Server, including the development branch, can be found on Launchpad.

Please report any bugs found at Bugs in Percona Server.
For general questions, use our Percona-discussions group, and for development questions our Percona-dev group.
For support, commercial, and sponsorship inquiries, contact Percona.


Entry posted by Fred Linhoss |
No comment

Oct 27, 2010

MySQL Limitations Part 4: One thread per connection

This is the fourth in a series on what's seriously limiting MySQL in core use cases (links: part 1, 2, 3). This post is about the way MySQL handles connections, allocating one thread per connection to the server.

MySQL is a single process with multiple threads. Not all databases are architected this way; some have multiple processes that communicate through shared memory or other means. It’s cheap to create a connection to MySQL, because it just requires creating a thread (or taking one from a cache). This is generally so fast that there isn’t really the need for connection pools as there is with other databases, at least not in the same way. Windows in particular has had excellent threading support practically forever; Linux has very good threading now, but that wasn’t always the case.

However, many development environments and programming languages really want a connection pool. They're just built that way (I'm looking at you, Java). And many others use persistent connections by default, so a connection isn't really closed when the application closes it; it's kind of like a connection pool, except that the connection persists from request to request within the same process, rather than being shared with whichever request needs a connection.

Connection pools and persistent connections combined with a large number of application servers can lead to a situation where the database server has a very large number of connections open to it, most of which are doing nothing. It’s not uncommon for me to see a server with 1000 to 5000 connections open, and maybe one to three are actually running queries on average. These connections originate from dozens to hundreds of application server instances. When you have a heavily sharded or otherwise horizontally scaled application, it’s not only easy to get into this pickle, it’s really hard or impossible to avoid it.
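
A quick way to see how a server you manage compares is to look at the thread status counters; this is just a generic check, not something from the original post:

CODE:

  # Threads_connected vs. Threads_running shows how many of the open
  # connections are actually doing work at this instant
  mysql -e "SHOW GLOBAL STATUS LIKE 'Threads_%'"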

And with 5000 connections open, you get 5000 threads in the server. That increases the overhead from thread scheduling, and potentially memory usage as well. I feel like I’m forgetting some reasons that this matters — please fill in whatever’s missing in the comments.

There can be more than one solution to this problem, but the one that’s actually partially implemented is a pool of threads, which was originally coded for MySQL 6.0, but is available now in MariaDB.

Unfortunately it isn’t a full solution, because it can cause undesirable lock-out or waiting, and the specific implementation has a scalability bottleneck on multicore servers. Mark Callaghan has done much more investigation of the pool of threads than I have. There are more details in this blog post by Mark, and two followup blog posts from Tim Cook (1, 2).
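
For reference, turning on the pool of threads in MariaDB is a configuration change roughly like the following; this is a sketch from memory, so check the MariaDB documentation for the exact option names in your version:

CODE:

  # my.cnf (sketch)
  [mysqld]
  # switch from one-thread-per-connection to the pool of threads
  thread_handling = pool-of-threads
  # assumed sizing option; roughly the number of CPU cores is a common starting point
  thread_pool_size = 16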

Thanks for the great comments on the last post. Some of them were good guesses. Remember that the context for this series isn’t micro-limitations or edge-case badness (even if they are serious in some cases), but rather a focus on shortcomings in the main use cases for the server. There are a lot of things MySQL doesn’t do well, but it doesn’t matter that much, because that’s not what it’s designed for. Wrong tool, wrong use, NotABug. I’m thinking of the lack of sort-merge joins or intra-query parallelism, for example. It would be lovely to have those things, if you’re running a data warehouse on MySQL, and in some cases for other uses too (note that most databases that do have these query plans usually try to use nested-loop joins whenever possible, because of things like the lower startup cost for the query). But MySQL isn’t a data warehouse DBMS first and foremost. It’s a general-purpose OLTP database server that runs well on affordable hardware and is great for Web usage. It’s so good, in fact, that it can be used for tons of other things such as… data warehousing. But it isn’t a Netezza or Paraccel, and if it were, it wouldn’t be a great OLTP web database too.

MySQL replication is one of the core, fundamental features — and it’s single-threaded and relies on the binary log, which are two major limitations. And it has subqueries, which are a core, fundamental part of SQL — but it’s bad at certain kinds of them. That’s why I listed those as major limitations. And because MySQL is a multi-threaded database for Web usage that tends to be used in sharded environments with tons of application servers, which creates a situation with many thousands of connections to the database, and because it doesn’t handle that very well, I list its one-thread-per-connection design as a serious limitation.


Entry posted by Baron Schwartz |
13 comments

Oct 26, 2010

Sharing an auto_increment value across multiple MySQL tables (revisited)

A couple of weeks ago I blogged about Sharing an auto_increment value across multiple MySQL tables. In the comments, a few people wrote in to suggest alternative ways of implementing this.  I just got around to benchmarking those alternatives today across two large EC2 machines:


(Measured in transactions/second – higher is better)

What is the conclusion? With the exception of my original option2, they all actually perform fairly similarly. The Flickr and Option1 tests perform marginally better. Test "arjen2" is option2 but with a MyISAM table; it suffers a little because EC2 latency can be a bit high, and there is one additional round trip. Test arjen2005 is not too dissimilar from the Flickr solution, but uses a MySQL stored function.
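
For readers who missed the original post, the Flickr-style approach being benchmarked looks roughly like this; the table name and dummy column below are just illustrative:

CODE:

  CREATE TABLE `Tickets64` (
    `id`   bigint unsigned NOT NULL AUTO_INCREMENT,
    `stub` char(1) NOT NULL DEFAULT '',
    PRIMARY KEY (`id`),
    UNIQUE KEY `stub` (`stub`)
  ) ENGINE=MyISAM;

  -- each caller bumps the shared counter and reads back its own value
  REPLACE INTO Tickets64 (stub) VALUES ('a');
  SELECT LAST_INSERT_ID();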

Full Disclosure.


Entry posted by Morgan Tocker |
7 comments

Oct 25, 2010

MySQL Limitations Part 3: Subqueries

This is the third in a series on what’s seriously limiting MySQL in certain circumstances (links: part 1, 2). This post is about subqueries, which in some cases execute outside-in instead of inside-out as users expect.

It’s easy to pick on subqueries in MySQL, so I’ll try to be gentle. The following query will surprise users unpleasantly:

select * from a where a.id in (select id from b);

Users expect the inner query to execute first, then the results to be substituted into the IN() list. But what happens instead is usually a full scan or index scan of table a, followed by N queries to table b. This is because MySQL rewrites the query to make the inner query dependent on the outer query, which could be an optimization in some cases, but de-optimizes the query in many other cases. NOT IN(SELECT …) queries execute badly, too. (Note: putting a literal list of items in the IN() clause performs fine. It’s only when there is a SELECT inside it that it works poorly.)

The fix for this has been in progress for a few years, and Sergey Petrunia committed working code to the stalled 6.0 release. But it’s not quite clear whether that code was a complete solution. It has not been in any GA or RC release, so it hasn’t been used widely.

To be fair, many other database servers also have poor subquery performance, or have had it in the past and have fixed it. And many MySQL users have learned to simply write JOINs instead, so it isn’t that much of a limitation. But it would be a big improvement if it were fixed.
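
For reference, the JOIN rewrite of the earlier query looks roughly like this (a sketch; if b.id is not unique, add a DISTINCT or join against a derived table of distinct ids to preserve the IN() semantics):

CODE:

  SELECT a.*
  FROM a
  INNER JOIN b ON b.id = a.id;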

See if you can guess what limitation number 4 will be!


Entry posted by Baron Schwartz |
17 comments

Oct 25, 2010

Impact of the sort buffer size in MySQL

The parameter sort_buffer_size is one of the MySQL parameters that is far from obvious to adjust. It is a per-session buffer that is allocated every time it is needed. The problem with the sort buffer comes from the way Linux allocates memory. Monty Taylor (here) has described the underlying issue in detail; basically, above 256 KB the allocation behavior changes and becomes slower. After reading a post from Ronald Bradford (here), I decided to verify this and benchmark performance while varying the size of the sort buffer. It is my understanding that the sort buffer is used when no index is available to help the sorting, so I created a MyISAM table with one char column and no index:

CODE:

  CREATE TABLE `sorttest` (
    `data` char(30) DEFAULT NULL
  ) ENGINE=MyISAM DEFAULT CHARSET=latin1

and I inserted 100k rows with this simple script:

CODE:

  #!/bin/bash

  NUMROW=100000
  COUNT=0
  while [ "$NUMROW" -gt "$COUNT" ]
  do
      UUID=`uuidgen`
      mysql test -e "insert into sorttest value ('$UUID');"
      let "COUNT=COUNT+1"
  done

I know, I could have used the uuid() function of MySQL. For the benchmark, I used an old PII 350 MHz computer; I think an old computer is better for such CPU-bound benchmarks, since small differences are easier to observe. I varied sort_buffer_size in steps of 32 KB and recorded the time required to perform 12 queries like 'select * from sorttest order by data limit 78000,1', with, of course, the query cache disabled. I also verified that the computer never swapped during the whole process, and I pre-warmed the file cache before the benchmark by doing "alter table sorttest engine=myisam;". The script used for the benchmark is the following:

CODE:

  #!/bin/bash

  for i in `seq 1 1000`
  do
          START=`date +%s.%N`
          OUT=`mysql -e "set session sort_buffer_size=32*1024*$i;select * from sorttest order by data limit 78000,1;show session status like 'Sort_merge_passes';select * from sorttest order by data limit 78000,1;select * from sorttest order by data limit 78000,1;select * from sorttest order by data limit 78000,1;select * from sorttest order by data limit 78000,1;select * from sorttest order by data limit 78000,1;select * from sorttest order by data limit 78000,1;select * from sorttest order by data limit 78000,1;select * from sorttest order by data limit 78000,1;select * from sorttest order by data limit 78000,1;select * from sorttest order by data limit 78000,1;select * from sorttest order by data limit 78000,1;" test`
          END=`date +%s.%N`
          MERGE=`echo $OUT | cut -d' ' -f6`
          TIME=`echo "$END - $START" | bc`
          echo "$i $MERGE $TIME"
  done

which outputs, in addition to the total time, the number of Sort_merge_passes, which will be useful in interpreting the results. The figure below shows a graphical representation of the results.

The first thing to notice in the graph is the expected correspondence between the time for the queries and the number of sort merge passes. For small values of the sort buffer size, below 440 KB, there are many sort merge passes and the time for the queries hovers around 18 s. Above 440 KB, as the number of sort merge passes drops to 1, the time for the queries drops below 14 s. Then, as the sort buffer size is raised further, the performance gain is negative up to the point, around 6.4 MB, where no sort merge passes are required at all; beyond that, the time for the queries loses all dependency on the sort buffer size. I am still trying to figure out why the number of sort merge passes fell to zero at 6.4 MB, since the total size of the table is less than 3 MB: a 6.4 MB buffer for 100,000 rows works out to roughly 64 bytes per row, while the rows themselves are under 30 bytes each, so the overhead is around 37 bytes per row. That seems high, but if the sort structure keeps a few pointers per row, it can add up to such an amount pretty quickly.

The important point here is that, at least for the Linux, glibc and MySQL versions I used and for the test I did, there doesn't seem to be any observable negative impact from the glibc memory allocation threshold at 256 KB. I'll try to find ways to repeat this little experiment with the other per-session buffers, just to confirm the findings.

OS: Ubuntu 10.04 LTS, Linux test1 2.6.32-21-generic-pae #32-Ubuntu SMP Fri Apr 16 09:39:35 UTC 2010 i686 GNU/Linux
MySQL: 5.1.41-3ubuntu12.6-log

P.S.: Gnumeric is so much better than OpenOffice Calc for graphs.


Entry posted by Yves Trudeau |
7 comments

Oct 23, 2010

MySQL Limitations Part 2: The Binary Log

This is the second in a series on what’s seriously limiting MySQL in certain circumstances (links: part 1). In the first part, I wrote about single-threaded replication. Upstream from the replicas is the primary, which enables replication by writing a so-called “binary log” of events that modify data in the server. The binary log is a real limitation in MySQL.

The binary log is necessary not only for replication, but for point-in-time recovery, too. Given a backup and the corresponding binary log position, you can replay the binary log and roll forward the state of your server to a desired point in time.
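
In practice that replay is done with mysqlbinlog; the file name, start position and stop time below are placeholders:

CODE:

  # restore the backup first, then roll forward to just before the point of failure
  mysqlbinlog --start-position=107 --stop-datetime="2010-10-23 11:59:59" \
      mysql-bin.000042 | mysql -u root -p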

But enabling the binary log reduces MySQL’s performance dramatically. It is not the logging itself that’s the problem — writing the log is usually not much additional work. It’s ensuring consistency and durability that is expensive. Flushing it to disk adds an fsync call for every transaction. And the server performs an XA transaction between InnoDB and the binary log. This adds more fsync calls, and causes mutex contention, and prevents group commit, and probably other things that aren’t coming to mind now.

The performance reduction can be an order of magnitude or more.
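
The settings where this durability-versus-speed trade-off is made are worth listing; this is a sketch of the relevant my.cnf options, not a tuning recommendation:

CODE:

  [mysqld]
  log_bin                        = mysql-bin
  # fsync the binary log after every commit; 0 leaves flushing to the OS (faster, less durable)
  sync_binlog                    = 1
  # fsync the InnoDB log at every commit
  innodb_flush_log_at_trx_commit = 1
  # keep the binary log and InnoDB consistent via XA, the extra cost described above
  innodb_support_xa              = 1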

What’s the solution? I’m not sure I can summarize it concisely. There is a lot of complexity, and honestly I don’t understand some of the server internals fully enough to have a 50-thousand-foot view of it all. The binary logging and replication code, and its interaction with InnoDB, is difficult to understand. Kristian Nielsen has an extensive series of posts on group commit alone.

I think that a full fix might require significant architectural changes to MySQL. This will be hard. Maybe Drizzle is going in a good direction — time will tell. All of the solutions that I can think of are too simplistic. For example, doing replication through the InnoDB transaction log would work fine if a) all the data were in InnoDB, and b) InnoDB’s data didn’t have to be synchronized with the .frm files (and Drizzle has gotten rid of the .frm files, hooray), and c) privileges and other changes to the non-InnoDB data in MySQL were handled manually.

It could work if you just made sure that you didn’t change privileges or schema, but that’s a description of a pretty limited, clunky replication system from the user’s point of view. Still, I have considered it. There would need to be a mechanism of transporting the log files, and InnoDB would have to be put into a state of constant “recovery,” and it would have to be modified to be available read-only in this state so that it could be used for read queries. This can be done, of course. It’s just a matter of how hard it is.

It’s worth noting that PBXT does replication through its transaction logs, so there’s even precedent for this among MySQL storage engines. And there is Galera’s multi-master synchronization technology to look at, too.


Entry posted by Baron Schwartz |
11 comments

Oct 22, 2010

High availability for MySQL on Amazon EC2 – Part 5 – The instance monitoring script

This post is the fifth of a series that started here.

From the previous posts in this series, we now have an instance restart script that can restart the database node in case of failure and automatically reconfigure Pacemaker and the other servers that need to access the MySQL server. What we will cover in this post is the monitoring script that runs on the MySQL node.

At its smallest expression, the instance monitoring script is a simple empty loop that runs forever:

CODE:

  #!/bin/bash

  while [ 1 ]
  do
      sleep 60
  done

Although fully functional, this monitoring script is rather weak: it could itself stop working and the cluster would be unaware of it. Very complex monitoring scripts can be written, but let's provide a basic functional one that monitors MySQL with the mysqladmin ping command.

CODE:

  #!/bin/sh

  # MySQL basedir passed as argument
  # mysqladmin must be under $BASEDIR/bin
  # sock file must be under $BASEDIR/mysqld.sock
  BASEDIR=/usr
  MYSQLUSER=root
  MYSQLPASS=root
  SOCKET=/var/run/mysqld/mysqld.sock
  # initial sleep to give time to MySQL to recover InnoDB
  /bin/sleep 60

  while [ 1 ]; do
          sleep 60
          STATUS=`$BASEDIR/bin/mysqladmin -S $SOCKET -u$MYSQLUSER -p$MYSQLPASS ping|grep -c alive`
          if [ $STATUS -ne 1 ]; then
                  # uname -n | /bin/mail -s "MySQL database down, rechecking in 5 seconds" $EMAIL
                  sleep 5
                  STATUS=`$BASEDIR/bin/mysqladmin -S $SOCKET -u$MYSQLUSER -p$MYSQLPASS ping|grep -c alive`
                  if [ $STATUS -ne 1 ]; then
                          # uname -n | /bin/mail -s "MySQL database down, forcing failover" $EMAIL
                          /etc/init.d/heartbeat stop
                          exit
                  fi
          fi
  done

Those who know Pacemaker a bit might wonder why I didn't use crm_resource to migrate the MySQL resource to the other node. The problem with this approach is that migration with crm_resource is achieved by setting the host affinity of the resource to INFINITY for the Monitor node. When the resource is moved, the MySQL node is restarted and its local copy of the cluster configuration is lost. After the restart, the copy of the cluster configuration on the Monitor node is pulled back by the MySQL node, and since this copy has an affinity of INFINITY for the resource to stay on the Monitor host, the resource will stay there. If the resource is not moved away from the Monitor node quickly (within 5 minutes) after the restart of the MySQL node, the instance restart script on the Monitor node will loop and restart the MySQL node again. Stopping the heartbeat service achieves the desired result without polluting the cluster configuration.

The next post in this series will add some details about how the IP of the MySQL server is broadcast to the client nodes.


Entry posted by Yves Trudeau |
No comment

Oct 21, 2010

Percona Server with XtraDB Case Study, Behind the Scenes

We’ve published our first case study. The customer, ideeli, had a database that was struggling on standard MySQL and InnoDB. The big win was the upgrade to XtraDB. The business continued to grow quickly, and months later under much more traffic, the database is still outperforming their previous version.

I thought I’d write a few notes that didn’t seem appropriate to include in the case study, because this was a fun project that might be interesting to readers.

As usual, it was all about diagnosing the problem correctly. I used a variety of tools to help with this, foremost among them “stalk” and “collect” from Aspersa. There were several problems, not just one, and they required different techniques to diagnose. This can be hard when the problems are happening sporadically and/or mixed together. You really need to be disciplined and collect data, data, data. If you are not sure about the cause of something, you don’t have the right data. Maybe you have too little, or too much, or you have the signal mixed in with the noise. Knowing when and how to get and interpret good diagnostic data is easily 95% or 98% of the work in a case like this. All I had to do was wait until the problem happened, look at the diagnostics, and a couple minutes later I had my answer.

What were the problems? The query cache was causing both mutex contention and excessive CPU usage, for different reasons, and I found different problems in different samples. InnoDB was also dying under mutex contention. Each spike of slow queries I found was caused by different things. Sometimes GDB stack traces showed InnoDB mutex contention, sometimes oprofile showed the query cache hogging the CPU, and so on. So we had to solve all the problems, not just some of them.
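
The stack traces mentioned here can be gathered with a gdb one-liner in the spirit of the "poor man's profiler"; this is a generic sketch, not necessarily the exact command used on this engagement:

CODE:

  # dump a backtrace of every mysqld thread; aggregating a few samples shows
  # which mutex or function most threads are waiting in
  gdb -p "$(pidof mysqld)" -batch -ex "set pagination 0" -ex "thread apply all bt"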

The graphs of query traffic and response times were from data I gathered with tcprstat. I also used the data from tcprstat to analyze the variation in query response time. One-second intervals are a relatively fine granularity, but at that level you can see more clearly when micro-freezes are occurring. I used ad-hoc slow-query-log analysis with awk and other tools to discover and investigate unusual patterns, and to figure out whether queries were causes or victims of performance problems. The problems here were not caused by queries, but query behavior was the symptom we could observe, so all of the above analysis was useful for detecting the problem as it happened and for verifying that it was no longer happening after we implemented fixes.
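
tcprstat sniffs the MySQL port and prints response-time aggregates per interval; an invocation along these lines produces the kind of per-second data referred to above (flags from memory, so check the tool's help output for your build):

CODE:

  # watch port 3306 and print response-time statistics once per second, indefinitely
  tcprstat -p 3306 -t 1 -n 0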

New Relic was a very helpful tool in this case, too. If you don’t use New Relic, you might try it. (We don’t get paid to say that.) Their tools are really nice.

I also want to mention that this database’s problems were entirely inside the database software itself. The ideeli team had already done a great job with indexing, query optimization, and so forth. Nothing more could be done without fixing these problems inside MySQL and InnoDB at the source code level, or changing the application architecture.

All things considered, the database server’s performance is not as high as many I’ve worked on, so the absolute numbers of queries per second may not look impressive. However, remember that this database is running on an EC2 server. EC2 has relatively slow virtual CPUs, and given that and the workload this server is under, it does very well. Of course you could far exceed that performance on a “real” server.

This case illustrates why average-case performance isn’t a good metric. As Peter says, “the average temperature of patients in the hospital isn’t important.” Good performance means avoiding outliers and variations. The query performance needs to be a) fast, and b) fast all the time, and c) the same kind of fast all the time. Variations in performance at one layer introduce cascading variations at each higher layer. Rarely is a stuttering component’s misbehavior absorbed by other layers. Instead, every layer above it gets into trouble.

That, among other things, is why the database has a much harder job than people sometimes realize, and why it’s so hard to write a good database server.


Entry posted by Baron Schwartz |
13 comments

Oct 21, 2010

Puppet Camp Report: Two very different days

I attended Puppet Camp in San Francisco this month, thanks to my benevolent employer Canonical’s sponsorship of the event.

It was quite an interesting ride. I’d consider myself an intermediate level puppet user, having only edited existing puppet configurations and used it for proof of concept work, not actual giant deployments. I went in large part to get in touch with users and potential users of Ubuntu Server to see what they think of it now, and what they want out of it in the future. Also Puppet is a really interesting technology that I think will be a key part of this march into the cloud that we’ve all begun.

The state of Puppet

This talk was given by Luke, and was a very frank discussion of where Puppet is and where it should be going. He also discussed briefly where Puppet Labs fits into this picture. In brief, Puppet is stable and growing. A survey of Puppet users showed that the overwhelming majority are sysadmins, which is no surprise. Debian and Ubuntu have equal share amongst survey respondents, but RHEL and CentOS dominate the playing field.

As for the future, there were a couple of things mentioned. Puppet needs some kind of messaging infrastructure, and it seems that mCollective will be it. They're not ready to announce anything, but it seems like a logical choice. There are also plans for centralized data services to make the data Puppet has access to available to other things.

mCollective

Given by mCollective's author, whose name escapes me, this was a live demo of what mCollective can do for you. It's basically a highly scalable messaging framework that is not necessarily tied to Puppet. You simply need to write an agent that will subscribe to your messages. Currently only ActiveMQ is supported, but it uses STOMP, so any queueing system that speaks STOMP should be able to use the same driver.

Once you have these agents consuming messages, you just need to get creative about what they can do. He currently has some Puppet-focused agents and client code to pull data out of Puppet and act accordingly. Ultimately, you could do much of this with something like Capistrano and parallel ssh, but this seems to scale well. One audience member boasted that they have over 1000 nodes using mCollective to perform tasks.

The Un-Conference

Puppet Camp took the form of an “un conference”, where there were just a few talks, and a bunch of sessions based on what people wanted to talk about. I didn’t propose anything, as I did not come with an agenda, but I definitely was interested in a few of the topics:

Puppet CA

My colleague at Canonical, Mathias Gug, proposed a discussion of the puppet CA mechanics, and it definitely interested me. Puppet uses the PKI system to verify clients and servers. The default mode of operation is for a new client to contact the configured puppet master, and submit a “CSR” or “Certificate Signing Request” to it. The puppet master administrator then verifies that the CSR is from one of their hosts, and signs it, allowing both sides to communicate with some degree of certainty that the certificates are valid.

Well there’s another option, which is just “autosign”. This works great on a LAN where access is highly guarded, as it no longer requires you to verify that your machine submitted the CSR. However, if you have any doubts about your network security, this is dangerous. An attacker can use this access to download all of your configuration information, which could contain password hashes, hidden hostnames, and any number of other things that you probably don’t want to share.
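
For context, this is roughly what the setting looks like in puppet.conf; a whitelist file of hostname globs sits in between fully manual signing and the blanket autosigning discussed above (paths and globs here are illustrative):

CODE:

  # /etc/puppet/puppet.conf (sketch)
  [master]
      # blanket autosigning: convenient on a locked-down LAN, dangerous elsewhere
      autosign = true
      # or point at a whitelist of hostname globs instead, e.g. *.internal.example.com:
      # autosign = /etc/puppet/autosign.conf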

When you add the cloud to this mix, it's even more important that you not just trust any host. IaaS cloud instances come and go all the time, with different hostnames/IPs and properties. Mathias had actually proposed an enhancement to Puppet to add a unique ID attribute for CSRs made in the cloud, but there was a problem with the Ruby OpenSSL library that wouldn't allow these attributes to be added to the certificate. We discussed possibly generating the certificate beforehand using the openssl binary, but this doesn't look like it will work without code changes to Puppet. I am not sure where we'll go from there.

Puppet Instrumentation

I'm always interested to see what people are doing to measure their success. I think a lot of the time we throw up whatever graph or alert monitoring is pre-packaged with something and figure we've done our part. There wasn't a real consensus on what the important things to measure are. As usual, sysadmins who are running Puppet are pressed for time, and measurement of their own processes often falls by the wayside under the pressure to measure everybody else.

Other stuff

There were a number of other sessions and discussions, but none that really jumped out at me. On the second day, an employee from Google's IT department gave a talk about Google's massive Puppet infrastructure. He explained that it is only used for IT support, not production systems, though he wasn't able to go into much more detail. Twitter also gave some info about how they use Puppet for their production servers, and there was an interesting discussion about the line between code and infrastructure deployment. This stemmed from a question I asked about why they didn't use their awesome BitTorrent-based "murder" code distribution system to deploy Puppet rules. The end of that discussion was "because murder is for code, and this is infrastructure".

Cloud10/Awstrial

So this was actually the coolest part of the trip. Early on the second day, during the announcements, the (sometimes hilarious) MC, Deepak, mentioned that there would be a beginner Puppet session later in the day. He asked that attendees of that session try to have a machine ready, so that the presenter, Dan Bode, could give them some examples to try out.

Some guys on the Canonical server team had been working on a project called "Cloud 10" for the release of Ubuntu 10.10, which was coming in just a couple of days. They had thrown together a Django app called awstrial that could be used to fire up EC2 or UEC images for free, for a limited period. The reason for this was to let people try Ubuntu Server 10.10 out for an hour on EC2. I immediately wondered, though: "Maybe we could just provide the Puppet beginner class with instances to try out!"

Huzzah! I mentioned this to Mathias, and he and I started bugging our team members about getting this setup. That was at 9:00am. By noon, 3 hours later, the app had been installed on a fresh EC2 instance, a DNS pointer had been created pointing to said instance, and the whole thing had been tweaked to reference puppet camp and allow the users to have 3 hours instead of 55 minutes.

As lunch began, Mathias announced that users could go to “puppet.ec42.net” in a browser and use their Launchpad or Ubuntu SSO credentials to spawn an instance.

A while later, when the beginner class started, 25 users had signed on and started instances. Unfortunately, the instances died after 55 minutes due to a bug in the code, but ultimately, the users were able to poke around with these instances and try out stuff Dan was suggesting. This made Canonical look good, it made Ubuntu look good, and it definitely has sparked a lot of discussion internally about what we might do with this little web app in the future to ease the process of demoing and training on Ubuntu Server.

And what's even more awesome about working at Canonical? This little web app, awstrial, is open source. Sweet: anybody can help us make it better, and even show us more creative ways to use it.


Oct 20, 2010

New Forum Categories: Help Wanted, For Hire

I’ve just added two categories to our forum, so you can post your job listings if you’re looking for someone to help you, and you can post your qualifications if you are available for hire.


Entry posted by Baron Schwartz |
No comment
