Howto Logging Cookbook
Release 2.7.6
Contents

1  Using logging in multiple modules
2  Multiple handlers and formatters
3  Logging to multiple destinations
4  Configuration server example
5  Sending and receiving logging events across a network
6  Adding contextual information to your logging output
   6.1  Using LoggerAdapters to impart contextual information
        Using objects other than dicts to pass contextual information
   6.2  Using Filters to impart contextual information
7  Logging to a single file from multiple processes
8  Using file rotation
9  An example dictionary-based configuration
Author: Vinay Sajip <vinay_sajip at red-dove dot com>

This page contains a number of recipes related to logging, which have been found useful in the past.
module_logger.info('received a call to "some_function"')

The output looks like this:

2005-03-23 23:47:11,663 - spam_application - INFO - creating an instance of auxiliary_module.Auxiliary
2005-03-23 23:47:11,665 - spam_application.auxiliary.Auxiliary - INFO - creating an instance of Auxiliary
2005-03-23 23:47:11,665 - spam_application - INFO - created an instance of auxiliary_module.Auxiliary
2005-03-23 23:47:11,668 - spam_application - INFO - calling auxiliary_module.Auxiliary.do_something
2005-03-23 23:47:11,668 - spam_application.auxiliary.Auxiliary - INFO - doing something
2005-03-23 23:47:11,669 - spam_application.auxiliary.Auxiliary - INFO - done doing something
2005-03-23 23:47:11,670 - spam_application - INFO - finished auxiliary_module.Auxiliary.do_something
2005-03-23 23:47:11,671 - spam_application - INFO - calling auxiliary_module.some_function()
2005-03-23 23:47:11,672 - spam_application.auxiliary - INFO - received a call to some_function
2005-03-23 23:47:11,673 - spam_application - INFO - done with auxiliary_module.some_function()
logger.warn('warn message')
logger.error('error message')
logger.critical('critical message')

Notice that the application code does not care about multiple handlers. All that changed was the addition and configuration of a new handler named fh.

The ability to create new handlers with higher- or lower-severity filters can be very helpful when writing and testing an application. Instead of using many print statements for debugging, use logger.debug: unlike the print statements, which you will have to delete or comment out later, the logger.debug statements can remain intact in the source code and stay dormant until you need them again. At that point, the only change needed is to modify the severity level of the logger and/or handler to debug.
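A minimal sketch of this dormant-debug pattern (the logger name and message contents here are illustrative, not from the original example):

```python
import logging

# At the normal severity threshold, debug calls emit nothing.
logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger(__name__)

# This call stays in the source permanently; it is silent while the
# effective level is WARNING, and comes alive as soon as the level
# is lowered to DEBUG.
logger.debug('intermediate state: %r', {'step': 1})
```

Lowering the level later (e.g. `logger.setLevel(logging.DEBUG)` or changing the `basicConfig` call) re-enables the output without touching the debug statements themselves.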
root        : INFO     Jackdaws love my big sphinx of quartz.
myapp.area1 : INFO     How quickly daft jumping zebras vex.
myapp.area2 : WARNING  Jail zesty vixen who grabbed pay from quack.
myapp.area2 : ERROR    The five boxing wizards jump quickly.
and in the file you will see something like

10-22 22:19 root         INFO     Jackdaws love my big sphinx of quartz.
10-22 22:19 myapp.area1  DEBUG    Quick zephyrs blow, vexing daft Jim.
10-22 22:19 myapp.area1  INFO     How quickly daft jumping zebras vex.
10-22 22:19 myapp.area2  WARNING  Jail zesty vixen who grabbed pay from quack.
10-22 22:19 myapp.area2  ERROR    The five boxing wizards jump quickly.
As you can see, the DEBUG message only shows up in the file. The other messages are sent to both destinations. This example uses console and file handlers, but you can use any number and combination of handlers you choose.
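A self-contained sketch of this console-plus-file setup (the file name, logger name and messages are illustrative):

```python
import logging

logger = logging.getLogger('myapp')
logger.setLevel(logging.DEBUG)

# File handler records everything down to DEBUG
fh = logging.FileHandler('myapp_multi_dest.log', mode='w')
fh.setLevel(logging.DEBUG)

# Console handler only shows INFO and above
ch = logging.StreamHandler()
ch.setLevel(logging.INFO)

fmt = logging.Formatter('%(name)-12s %(levelname)-8s %(message)s')
fh.setFormatter(fmt)
ch.setFormatter(fmt)

logger.addHandler(fh)
logger.addHandler(ch)

logger.debug('appears in the file only')
logger.info('appears in both destinations')
```

Because the filtering happens per handler, the same logger call fans out to each destination at its own threshold.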
# read initial config file
logging.config.fileConfig('logging.conf')

# create and start listener on port 9999
t = logging.config.listen(9999)
t.start()

logger = logging.getLogger('simpleExample')

try:
    # loop through logging calls to see the difference
    # new configurations make, until Ctrl+C is pressed
    while True:
        logger.debug('debug message')
        logger.info('info message')
        logger.warn('warn message')
        logger.error('error message')
        logger.critical('critical message')
        time.sleep(5)
except KeyboardInterrupt:
    # cleanup
    logging.config.stopListening()
    t.join()

And here is a script that takes a filename and sends that file to the server, properly preceded with the binary-encoded length, as the new logging configuration:

#!/usr/bin/env python
import socket, sys, struct
with open(sys.argv[1], 'rb') as f:
    data_to_send = f.read()

HOST = 'localhost'
PORT = 9999
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print('connecting...')
s.connect((HOST, PORT))
print('sending config...')
s.send(struct.pack('>L', len(data_to_send)))
s.send(data_to_send)
s.close()
print('complete')
    This basically logs the record using whatever logging policy is
    configured locally.
    """

    def handle(self):
        """
        Handle multiple requests - each expected to be a 4-byte length,
        followed by the LogRecord in pickle format. Logs the record
        according to whatever policy is configured locally.
        """
        while True:
            chunk = self.connection.recv(4)
            if len(chunk) < 4:
                break
            slen = struct.unpack('>L', chunk)[0]
            chunk = self.connection.recv(slen)
            while len(chunk) < slen:
                chunk = chunk + self.connection.recv(slen - len(chunk))
            obj = self.unPickle(chunk)
            record = logging.makeLogRecord(obj)
            self.handleLogRecord(record)

    def unPickle(self, data):
        return pickle.loads(data)

    def handleLogRecord(self, record):
        # if a name is specified, we use the named logger rather than the one
        # implied by the record.
        if self.server.logname is not None:
            name = self.server.logname
        else:
            name = record.name
        logger = logging.getLogger(name)
        # N.B. EVERY record gets logged. This is because Logger.handle
        # is normally called AFTER logger-level filtering. If you want
        # to do filtering, do it at the client end to save wasting
        # cycles and network bandwidth!
        logger.handle(record)

class LogRecordSocketReceiver(SocketServer.ThreadingTCPServer):
    """
    Simple TCP socket-based logging receiver suitable for testing.
    """

    allow_reuse_address = 1

    def __init__(self, host='localhost',
                 port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                 handler=LogRecordStreamHandler):
        SocketServer.ThreadingTCPServer.__init__(self, (host, port), handler)
        self.abort = 0
        self.timeout = 1
        self.logname = None
    def serve_until_stopped(self):
        import select
        abort = 0
        while not abort:
            rd, wr, ex = select.select([self.socket.fileno()],
                                       [], [],
                                       self.timeout)
            if rd:
                self.handle_request()
            abort = self.abort

def main():
    logging.basicConfig(
        format='%(relativeCreated)5d %(name)-15s %(levelname)-8s %(message)s')
    tcpserver = LogRecordSocketReceiver()
    print('About to start TCP server...')
    tcpserver.serve_until_stopped()

if __name__ == '__main__':
    main()

First run the server, and then the client. On the client side, nothing is printed on the console; on the server side, you should see something like:

About to start TCP server...
   59 root            INFO     Jackdaws love my big sphinx of quartz.
   59 myapp.area1     DEBUG    Quick zephyrs blow, vexing daft Jim.
   69 myapp.area1     INFO     How quickly daft jumping zebras vex.
   69 myapp.area2     WARNING  Jail zesty vixen who grabbed pay from quack.
   69 myapp.area2     ERROR    The five boxing wizards jump quickly.
Note that there are some security issues with pickle in some scenarios. If these affect you, you can use an alternative serialization scheme by overriding the makePickle() method and implementing your alternative there, as well as adapting the above script to use your alternative serialization.
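As a sketch of such an alternative, the handler below serializes records as JSON instead of pickle. JSONSocketHandler is a hypothetical name, and the attribute handling is simplified (the stock makePickle treats args and exc_info more carefully):

```python
import json
import logging
import logging.handlers
import struct

class JSONSocketHandler(logging.handlers.SocketHandler):
    """Hypothetical SocketHandler that frames JSON instead of pickle,
    sidestepping pickle's security issues on the receiving end."""

    def makePickle(self, record):
        # Copy the record's attributes, substituting the fully formatted
        # message so the receiver need not re-apply the args.
        d = dict(record.__dict__)
        d['msg'] = record.getMessage()
        d['args'] = None
        d['exc_info'] = None
        data = json.dumps(d, default=str).encode('utf-8')
        # Keep the same length-prefixed framing the default uses.
        return struct.pack('>L', len(data)) + data
```

The receiver would then replace pickle.loads with json.loads in its unPickle step; the 4-byte length prefix and makeLogRecord call stay the same.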
When you create an instance of LoggerAdapter, you pass it a Logger instance and a dict-like object which contains your contextual information. When you call one of the logging methods on an instance of LoggerAdapter, it delegates the call to the underlying instance of Logger passed to its constructor, and arranges to pass the contextual information in the delegated call. Here's a snippet from the code of LoggerAdapter:

def debug(self, msg, *args, **kwargs):
    """
    Delegate a debug call to the underlying logger, after adding
    contextual information from this adapter instance.
    """
    msg, kwargs = self.process(msg, kwargs)
    self.logger.debug(msg, *args, **kwargs)

The process() method of LoggerAdapter is where the contextual information is added to the logging output. It's passed the message and keyword arguments of the logging call, and it passes back (potentially) modified versions of these to use in the call to the underlying logger. The default implementation of this method leaves the message alone, but inserts an 'extra' key in the keyword arguments whose value is the dict-like object passed to the constructor. Of course, if you had passed an 'extra' keyword argument in the call to the adapter, it will be silently overwritten.

The advantage of using 'extra' is that the values in the dict-like object are merged into the LogRecord instance's __dict__, allowing you to use customized strings with your Formatter instances which know about the keys of the dict-like object. If you need a different method, e.g. if you want to prepend or append the contextual information to the message string, you just need to subclass LoggerAdapter and override process() to do what you need. Here is a simple example:

class CustomAdapter(logging.LoggerAdapter):
    """
    This example adapter expects the passed in dict-like object to have a
    'connid' key, whose value in brackets is prepended to the log message.
""" def process(self, msg, kwargs): return [%s] %s % (self.extra[connid], msg), kwargs which you can use like this: logger = logging.getLogger(__name__) adapter = CustomAdapter(logger, {connid: some_conn_id}) Then any events that you log to the adapter will have the value of some_conn_id prepended to the log messages. Using objects other than dicts to pass contextual information You dont need to pass an actual dict to a LoggerAdapter - you could pass an instance of a class which implements __getitem__ and __iter__ so that it looks like a dict to logging. This would be useful if you want to generate values dynamically (whereas the values in a dict would be constant).
and 'user' as in the LoggerAdapter example above. In that case, the same format string can be used to get similar output to that shown above. Here's an example script:

import logging
from random import choice

class ContextFilter(logging.Filter):
    """
    This is a filter which injects contextual information into the log.

    Rather than use actual contextual information, we just use random
    data in this demo.
    """

    USERS = ['jim', 'fred', 'sheila']
    IPS = ['123.231.231.123', '127.0.0.1', '192.168.0.1']

    def filter(self, record):
        record.ip = choice(ContextFilter.IPS)
        record.user = choice(ContextFilter.USERS)
        return True
if __name__ == '__main__':
    levels = (logging.DEBUG, logging.INFO, logging.WARNING,
              logging.ERROR, logging.CRITICAL)
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
    a1 = logging.getLogger('a.b.c')
    a2 = logging.getLogger('d.e.f')

    f = ContextFilter()
    a1.addFilter(f)
    a2.addFilter(f)
    a1.debug('A debug message')
    a1.info('An info message with %s', 'some parameters')
    for x in range(10):
        lvl = choice(levels)
        lvlname = logging.getLevelName(lvl)
        a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')

which, when run, produces something like:

2010-09-06 22:38:15,292 a.b.c DEBUG    IP: 123.231.231.123 User: fred     A debug message
2010-09-06 22:38:15,300 a.b.c INFO     IP: 192.168.0.1     User: sheila   An info message with some parameters
2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
2010-09-06 22:38:15,300 d.e.f ERROR    IP: 127.0.0.1       User: jim      A message at ERROR level with 2 parameters
2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 127.0.0.1       User: sheila   A message at DEBUG level with 2 parameters
2010-09-06 22:38:15,300 d.e.f ERROR    IP: 123.231.231.123 User: fred     A message at ERROR level with 2 parameters
2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 192.168.0.1     User: jim      A message at CRITICAL level with 2 parameters
2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 192.168.0.1     User: jim      A message at DEBUG level with 2 parameters
2010-09-06 22:38:15,301 d.e.f ERROR    IP: 127.0.0.1       User: sheila   A message at ERROR level with 2 parameters
2010-09-06 22:38:15,301 d.e.f DEBUG    IP: 123.231.231.123 User: fred     A message at DEBUG level with 2 parameters
2010-09-06 22:38:15,301 d.e.f INFO     IP: 123.231.231.123 User: fred     A message at INFO level with 2 parameters
8 Using file rotation
Sometimes you want to let a log file grow to a certain size, then open a new file and log to that. You may want to keep a certain number of these files, and when that many files have been created, rotate the files so that the number of files and the size of the files both remain bounded. For this usage pattern, the logging package provides a RotatingFileHandler:

import glob
import logging
import logging.handlers

LOG_FILENAME = 'logging_rotatingfile_example.out'

# Set up a specific logger with our desired output level
my_logger = logging.getLogger('MyLogger')
my_logger.setLevel(logging.DEBUG)

# Add the log message handler to the logger
handler = logging.handlers.RotatingFileHandler(
    LOG_FILENAME, maxBytes=20, backupCount=5)

my_logger.addHandler(handler)

# Log some messages
for i in range(20):
    my_logger.debug('i = %d' % i)

# See what files are created
logfiles = glob.glob('%s*' % LOG_FILENAME)

for filename in logfiles:
    print(filename)

The result should be 6 separate files, each with part of the log history for the application:

logging_rotatingfile_example.out
logging_rotatingfile_example.out.1
logging_rotatingfile_example.out.2
logging_rotatingfile_example.out.3
logging_rotatingfile_example.out.4
logging_rotatingfile_example.out.5

The most current file is always logging_rotatingfile_example.out, and each time it reaches the size limit it is renamed with the suffix .1. Each of the existing backup files is renamed to increment the suffix (.1 becomes .2, etc.) and the .6 file is erased. Obviously this example sets the log length much too small as an extreme example. You would want to set maxBytes to an appropriate value.
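If you would rather rotate on a schedule than on size, the handlers module also provides TimedRotatingFileHandler. A brief sketch (the file name, interval and message are chosen arbitrarily for illustration):

```python
import logging
import logging.handlers

# Roll the log over at midnight, keeping the last 7 files.
logger = logging.getLogger('TimedLogger')
logger.setLevel(logging.DEBUG)

handler = logging.handlers.TimedRotatingFileHandler(
    'timed_rotation_example.out', when='midnight', backupCount=7)
logger.addHandler(handler)

logger.debug('This message goes to the timed log file')
```

With when='midnight', rotated files are given date-stamped suffixes rather than the .1, .2 numbering used by RotatingFileHandler.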
9 An example dictionary-based configuration

LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'verbose': {
            'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
        },
        'simple': {
            'format': '%(levelname)s %(message)s'
        },
    },
    'filters': {
        'special': {
            '()': 'project.logging.SpecialFilter',
            'foo': 'bar',
        }
    },
    'handlers': {
        'null': {
            'level': 'DEBUG',
            'class': 'django.utils.log.NullHandler',
        },
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'simple'
        },
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler',
            'filters': ['special']
        }
    },
    'loggers': {
        'django': {
            'handlers': ['null'],
            'propagate': True,
            'level': 'INFO',
        },
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': False,
        },
        'myproject.custom': {
            'handlers': ['console', 'mail_admins'],
            'level': 'INFO',
            'filters': ['special']
        }
    }
}

For more information about this configuration, you can see the relevant section of the Django documentation.
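A dictionary in this form takes effect when passed to logging.config.dictConfig(). Here is a cut-down, self-contained sketch: it drops the Django-specific handler classes and filters so it can run anywhere, and the logger name is illustrative:

```python
import logging.config

# Simplified stand-in for the Django-style LOGGING dict above.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'simple': {'format': '%(levelname)s %(message)s'},
    },
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'simple',
        },
    },
    'loggers': {
        'myproject': {
            'handlers': ['console'],
            'level': 'INFO',
        },
    },
}

logging.config.dictConfig(LOGGING)
logger = logging.getLogger('myproject')
logger.info('configured via dictConfig')
```

Everything in the dict is declarative, so the same structure can be loaded from JSON or YAML and applied with a single dictConfig call.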
import json
import logging

class StructuredMessage(object):
    def __init__(self, message, **kwargs):
        self.message = message
        self.kwargs = kwargs

    def __str__(self):
        return '%s >>> %s' % (self.message, json.dumps(self.kwargs))

_ = StructuredMessage   # optional, to improve readability
logging.basicConfig(level=logging.INFO, format='%(message)s')
logging.info(_('message 1', foo='bar', bar='baz', num=123, fnum=123.456))

If the above script is run, it prints:

message 1 >>> {"fnum": 123.456, "num": 123, "bar": "baz", "foo": "bar"}

Note that the order of items might be different according to the version of Python used.

If you need more specialised processing, you can use a custom JSON encoder, as in the following complete example:

from __future__ import unicode_literals
import json
import logging

# This next bit is to ensure the script runs unchanged on 2.x and 3.x
try:
    unicode
except NameError:
    unicode = str

class Encoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, set):
            return tuple(o)
        elif isinstance(o, unicode):
            return o.encode('unicode_escape').decode('ascii')
        return super(Encoder, self).default(o)

class StructuredMessage(object):
    def __init__(self, message, **kwargs):
        self.message = message
        self.kwargs = kwargs

    def __str__(self):
        s = Encoder().encode(self.kwargs)
        return '%s >>> %s' % (self.message, s)

_ = StructuredMessage   # optional, to improve readability
def main():
    logging.basicConfig(level=logging.INFO, format='%(message)s')
    logging.info(_('message 1', set_value=set([1, 2, 3]), snowman='\u2603'))

if __name__ == '__main__':
    main()

When the above script is run, it prints:

message 1 >>> {"snowman": "\u2603", "set_value": [1, 2, 3]}

Note that the order of items might be different according to the version of Python used.