Django with time zone support and MySQL

This is yet another story of the Django web framework with time zone support, and the pain of dealing with Python datetimes with MySQL on the backend. In other words, offset-naive vs. offset-aware datetimes.

Briefly, more about the problem. After reading the official documentation about time zones, it becomes clear that in order to render a Python datetime in the desired time zone you need to make it tz-aware first and then display it in that time zone.

Here is the first issue: tz-aware in which time zone? MySQL stores timestamps in UTC and converts to/from the current time zone on storage/retrieval. By default, the current time zone is the server's time zone, and it can be changed in MySQL globally, per connection, etc. So it is not obvious what the tz of a value originally was before it was stored in UTC, and changing the server or session tz later only adds to the mess. Unlike MySQL, PostgreSQL has a timestamp with time zone data type, so Django can detect the tz and make datetimes tz-aware automatically, populating the tzinfo attribute.
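To make the naive vs. aware distinction concrete, here is a minimal stdlib-only sketch. It uses datetime.timezone rather than Django's helpers, and a fixed UTC-5 offset stands in for a real user time zone:

```python
from datetime import datetime, timezone, timedelta

# A naive datetime: no tzinfo, so it is ambiguous on its own.
naive = datetime(2014, 1, 1, 12, 0, 0)
assert naive.tzinfo is None

# Label it explicitly as UTC, then view it in another zone.
aware_utc = naive.replace(tzinfo=timezone.utc)
user_tz = timezone(timedelta(hours=-5))
local = aware_utc.astimezone(user_tz)
print(local.isoformat())  # 2014-01-01T07:00:00-05:00
```

The underlying point in time never changes; only the representation does, once the value carries a tzinfo attribute.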

There are many solutions on the web, for example an extension that detects the tz in the UI via JavaScript and passes it back to the backend, allowing you to work with tz-aware data. However, I needed something simpler, and mainly with fewer changes to the existing code base.

Here is my case. The server and MySQL are on UTC. That’s cool, and it removes the first barrier. I store Python datetimes in MySQL timestamp columns, also in UTC per the database time. In any case, it is a best practice to store time in UTC. I have some read-only pages on the web app and want to show datetimes in the user’s tz. It looks like a simple task, but with MySQL on the backend all the data coming from my models is assigned a naive datetime type. So I need an easy way to make all my DateTimeField values UTC-aware (add the tzinfo attribute), plus some convenient method of showing datetimes in the user’s tz while still having access to UTC or naive datetimes for calculations on the backend. In other words, I will keep doing all calculations in UTC and show tz-aware values to users only on the UI side.

This is an example of middleware that gets the user’s tz from the database and sets it in the session, so it can be retrieved anywhere using get_current_timezone():

from django.utils.timezone import activate
from myapp.models import UserInfo

class TimezoneMiddleware(object):
    """Middleware that is run on each request before the view is executed.

    Activates the user's timezone for later retrieval by get_current_timezone()
    or for creating tz-aware datetime objects.
    """
    def process_request(self, request):
        session_tz = request.session.get('timezone')
        # If tz has already been set in the session, activate it
        # and avoid an SQL query on each request.
        if session_tz:
            activate(session_tz)
        else:
            try:
                # Get the user's tz from the database.
                uinfo = UserInfo.objects.get(user_id=request.user.id,
                                             user_id__isnull=False)
                if uinfo.timezone:
                    # If the user has configured a tz, store it in the
                    # session and activate it.
                    request.session['timezone'] = uinfo.timezone
                    activate(uinfo.timezone)
            except UserInfo.DoesNotExist:
                pass
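For the middleware to run, it has to be registered in the project settings. A sketch, assuming the class lives in myapp/middleware.py (the module path and the surrounding middleware list are assumptions):

```python
# settings.py (excerpt)
USE_TZ = True
TIME_ZONE = 'UTC'

MIDDLEWARE_CLASSES = (
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    # Must come after the session and auth middleware, since it reads
    # request.session and request.user.
    'myapp.middleware.TimezoneMiddleware',
)
```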

This is an excerpt from models.py:

import datetime

from django.db import models
from django.utils.timezone import get_current_timezone, make_aware, utc

def localize_datetime(dtime):
    """Makes a DateTimeField value UTC-aware and returns a datetime string
    localized to the user's timezone in ISO format.
    """
    tz_aware = make_aware(dtime, utc).astimezone(get_current_timezone())
    return datetime.datetime.strftime(tz_aware, '%Y-%m-%d %H:%M:%S')

class Messages(models.Model):
    id = models.AutoField(primary_key=True)
    body = models.CharField(max_length=160)
    created = models.DateTimeField(auto_now_add=True)

    @property
    def created_tz(self):
        return localize_datetime(self.created)
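Outside of Django, the same conversion can be sketched with the stdlib alone. Here a fixed UTC+2 offset plays the role of the time zone activated for the session; everything else mirrors localize_datetime() above:

```python
from datetime import datetime, timezone, timedelta

# Stdlib stand-in for localize_datetime(): label the naive UTC value,
# convert it to the user's zone, and format without the offset portion.
def localize_datetime(dtime, user_tz=timezone(timedelta(hours=2))):
    tz_aware = dtime.replace(tzinfo=timezone.utc).astimezone(user_tz)
    return tz_aware.strftime('%Y-%m-%d %H:%M:%S')

print(localize_datetime(datetime(2014, 1, 1, 22, 30)))  # 2014-01-02 00:30:00
```

Note that the conversion can roll the date over, as in the example, which is exactly why formatting should happen only at the display edge.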

The “Messages” model has a “created” field (a timestamp in MySQL) and a property “created_tz”. That property renders “created” in the user’s tz using localize_datetime(), which makes the naive datetime tz(UTC)-aware, converts it into the user’s tz set at the session level, and returns a string in ISO format. In my case I prefer not to use the default RFC format, which includes the +00:00 tz portion of a datetime with the tzinfo attribute, and I don’t need tz-aware datetimes to operate with. In the same way I can add similar properties to all the models that need them, knowing they can be accessed by the same name with a “_tz” suffix.

Taking the above into account, I reference “created” for calculations in views or controllers, and “created_tz” in templates or for JSON output. This way I don’t need to change all references of “created” to something like “make_aware(created, utc)”, or datetime.datetime.utcnow() to datetime.datetime.utcnow().replace(tzinfo=pytz.utc), across the code. The code changes in my app are minimal: introduce a custom property in the model and keep operating in UTC at the raw level:

# views.py
# Operating in UTC
msgs = Messages.objects.filter(created__gt=datetime.datetime.now() -
                               datetime.timedelta(hours=1))  # example window

<!-- HTML template -->
{% for m in msgs %}
    {{ m.id }}. {{ m.body }} (added on {{ m.created_tz }})
{% endfor %}
* All times in user's tz.

I hope this article helps in your own explorations.
Happy New Year across all time zones!

The post Django with time zone support and MySQL appeared first on MySQL Performance Blog.


Puppet Camp Report: Two very different days

I attended Puppet Camp in San Francisco this month, thanks to my benevolent employer Canonical’s sponsorship of the event.

It was quite an interesting ride. I’d consider myself an intermediate level puppet user, having only edited existing puppet configurations and used it for proof of concept work, not actual giant deployments. I went in large part to get in touch with users and potential users of Ubuntu Server to see what they think of it now, and what they want out of it in the future. Also Puppet is a really interesting technology that I think will be a key part of this march into the cloud that we’ve all begun.

The state of Puppet

This talk was given by Luke, and was a very frank discussion of where puppet is and where it should be going. He discussed in brief where puppet labs fit into this as well. In short, puppet is stable and growing. In a survey of puppet users, the overwhelming majority were sysadmins, which is no surprise. Debian and Ubuntu have equal share amongst survey respondents, but RHEL and CentOS dominate the playing field.

As for the future, a couple of things were mentioned. Puppet needs some kind of messaging infrastructure, and it seems that mCollective will be it. They’re not ready to announce anything, but it seems like a logical choice. There are also plans for centralized data services to make the data puppet has access to available to other tools.


mCollective

Given by mCollective’s author, whose name escapes me, this was a live demo of what mCollective can do for you. It’s basically a highly scalable messaging framework that is not necessarily tied to puppet. You simply write an agent that subscribes to your messages. Currently only ActiveMQ is supported, but it uses STOMP, so any queueing system that speaks STOMP should be able to use the same driver.

Once you have these agents consuming messages, it’s just a matter of being creative about what they can do. He currently has some puppet-focused agents and client code to pull data out of puppet and act accordingly. Ultimately, you could do much of this with something like Capistrano or parallel ssh, but this seems to scale well. One audience member boasted that they have over 1000 nodes using mCollective to perform tasks.

The Un-Conference

Puppet Camp took the form of an “un conference”, where there were just a few talks, and a bunch of sessions based on what people wanted to talk about. I didn’t propose anything, as I did not come with an agenda, but I definitely was interested in a few of the topics:

Puppet CA

My colleague at Canonical, Mathias Gug, proposed a discussion of the puppet CA mechanics, and it definitely interested me. Puppet uses the PKI system to verify clients and servers. The default mode of operation is for a new client to contact the configured puppet master, and submit a “CSR” or “Certificate Signing Request” to it. The puppet master administrator then verifies that the CSR is from one of their hosts, and signs it, allowing both sides to communicate with some degree of certainty that the certificates are valid.

Well, there’s another option, which is just “autosign”. This works great on a LAN where access is tightly guarded, since it no longer requires you to verify that your machine submitted the CSR. However, if you have any doubts about your network security, it is dangerous: an attacker can use this access to download all of your configuration information, which could contain password hashes, hidden hostnames, and any number of other things you probably don’t want to share.
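For reference, autosigning is driven by the puppet master's configuration. A sketch of the two pieces involved; the paths assume a typical Debian/Ubuntu layout and the hostname pattern is purely illustrative:

```ini
# /etc/puppet/puppet.conf (excerpt)
[master]
autosign = /etc/puppet/autosign.conf

# /etc/puppet/autosign.conf: any CSR whose certname matches a line here
# is signed with no human review -- convenient on a locked-down LAN,
# risky anywhere else.
*.internal.example.com
```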

When you add the cloud to this mix, it’s even more important that you not just trust any host. IaaS cloud instances come and go all the time, with different hostnames/IPs and properties. Mathias had actually proposed an enhancement to puppet to add a unique ID attribute for CSRs made in the cloud, but there was a problem with the ruby OpenSSL library that wouldn’t allow these attributes to be added to the certificate. We discussed possibly generating the certificate beforehand using the openssl binary, but this doesn’t look like it will work without code changes to Puppet. I am not sure where we’ll go from there.

Puppet Instrumentation

I’m always interested to see what people are doing to measure their success. I think a lot of the time we throw up whatever graph or alert monitoring comes pre-packaged with something, and figure we’ve done our part. There wasn’t a real consensus on what the important things to measure are. As usual, sysadmins who are running puppet are pressed for time, and measurement of their own processes often falls by the wayside under the pressure to measure everybody else.

Other stuff

There were a number of other sessions and discussions, but none that really jumped out at me. On the second day, an employee from Google’s IT department gave a talk about Google’s massive puppet infrastructure. He said it is only used for IT support, not production systems, though he wasn’t able to go into much more detail. Twitter also gave some info about how they use puppet for their production servers, and there was an interesting discussion about the line between code and infrastructure deployment. This stemmed from a question I asked about why they didn’t use their awesome BitTorrent-based “murder” code distribution system to deploy puppet rules. The answer was “because murder is for code, and this is infrastructure”.


So this was actually the coolest part of the trip. Early on the second day, during the announcements, the (sometimes hilarious) MC, Deepak, mentioned that there would be a beginner puppet session later in the day. He asked that attendees of that session try to have a machine ready, so that the presenter, Dan Bode, could give them some examples to try out.

Some guys on the Canonical server team had been working on a project called “Cloud 10” for the release of Ubuntu 10.10, which was coming in just a couple of days. They had thrown together a django app called awstrial that could be used to fire up EC2 or UEC images for free, for a limited period. The point was to let people try Ubuntu Server 10.10 out for an hour on EC2. I immediately thought: “Maybe we could just provide the puppet beginner class with instances to try out!”

Huzzah! I mentioned this to Mathias, and he and I started bugging our team members about getting this setup. That was at 9:00am. By noon, 3 hours later, the app had been installed on a fresh EC2 instance, a DNS pointer had been created pointing to said instance, and the whole thing had been tweaked to reference puppet camp and allow the users to have 3 hours instead of 55 minutes.

As lunch began, Mathias announced that users could go to “puppet.ec42.net” in a browser and use their Launchpad or Ubuntu SSO credentials to spawn an instance.

A while later, when the beginner class started, 25 users had signed on and started instances. Unfortunately, the instances died after 55 minutes due to a bug in the code, but ultimately, the users were able to poke around with these instances and try out stuff Dan was suggesting. This made Canonical look good, it made Ubuntu look good, and it definitely has sparked a lot of discussion internally about what we might do with this little web app in the future to ease the process of demoing and training on Ubuntu Server.

And what’s even more awesome about working at Canonical? This little web app, awstrial, is open source. Sweet, so anybody can help us make it better, and even show us more creative ways to use it.
