Howto Logging Cookbook
Release 2.7.3
Contents
1  Using logging in multiple modules
2  Multiple handlers and formatters
3  Logging to multiple destinations
4  Configuration server example
5  Sending and receiving logging events across a network
6  Adding contextual information to your logging output
   6.1  Using LoggerAdapters to impart contextual information
   6.2  Using Filters to impart contextual information
7  Logging to a single file from multiple processes
8  Using file rotation
9  An example dictionary-based configuration
Author: Vinay Sajip <vinay_sajip at red-dove dot com>

This page contains a number of recipes related to logging, which have been found useful in the past.
logger in one module and create (but not configure) a child logger in a separate module, and all logger calls to the child will pass up to the parent. Here is a main module:

import logging
import auxiliary_module

# create logger with 'spam_application'
logger = logging.getLogger('spam_application')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.DEBUG)
# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
ch.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
logger.addHandler(ch)

logger.info('creating an instance of auxiliary_module.Auxiliary')
a = auxiliary_module.Auxiliary()
logger.info('created an instance of auxiliary_module.Auxiliary')
logger.info('calling auxiliary_module.Auxiliary.do_something')
a.do_something()
logger.info('finished auxiliary_module.Auxiliary.do_something')
logger.info('calling auxiliary_module.some_function()')
auxiliary_module.some_function()
logger.info('done with auxiliary_module.some_function()')

Here is the auxiliary module:

import logging

# create logger
module_logger = logging.getLogger('spam_application.auxiliary')

class Auxiliary:
    def __init__(self):
        self.logger = logging.getLogger('spam_application.auxiliary.Auxiliary')
        self.logger.info('creating an instance of Auxiliary')

    def do_something(self):
        self.logger.info('doing something')
        a = 1 + 1
        self.logger.info('done doing something')

def some_function():
    module_logger.info('received a call to "some_function"')

The output looks like this:

2005-03-23 23:47:11,663 - spam_application - INFO - creating an instance of auxiliary_module.Auxiliary
2005-03-23 23:47:11,665 - spam_application.auxiliary.Auxiliary - INFO - creating an instance of Auxiliary
2005-03-23 23:47:11,665 - spam_application - INFO - created an instance of auxiliary_module.Auxiliary
2005-03-23 23:47:11,668 - spam_application - INFO - calling auxiliary_module.Auxiliary.do_something
2005-03-23 23:47:11,668 - spam_application.auxiliary.Auxiliary - INFO - doing something
2005-03-23 23:47:11,669 - spam_application.auxiliary.Auxiliary - INFO - done doing something
2005-03-23 23:47:11,670 - spam_application - INFO - finished auxiliary_module.Auxiliary.do_something
2005-03-23 23:47:11,671 - spam_application - INFO - calling auxiliary_module.some_function()
2005-03-23 23:47:11,672 - spam_application.auxiliary - INFO - received a call to "some_function"
2005-03-23 23:47:11,673 - spam_application - INFO - done with auxiliary_module.some_function()
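The behaviour the example relies on can be demonstrated in a few lines: getLogger() with the same name always returns the same logger object, and records logged to an unconfigured child propagate up to handlers on its ancestors. The ListHandler class and the demo_app logger name below are illustrative only, not part of the example above:

```python
import logging

class ListHandler(logging.Handler):
    """Collect formatted records in a list, for demonstration purposes."""
    def __init__(self):
        logging.Handler.__init__(self)
        self.records = []

    def emit(self, record):
        self.records.append(self.format(record))

# configure only the parent logger
parent = logging.getLogger('demo_app')
parent.setLevel(logging.DEBUG)
handler = ListHandler()
handler.setFormatter(logging.Formatter('%(name)s - %(levelname)s - %(message)s'))
parent.addHandler(handler)

# the child is created but never configured; its records still reach
# the parent's handler via propagation
child = logging.getLogger('demo_app.worker')
child.info('hello from the child')

assert logging.getLogger('demo_app.worker') is child  # same object each time
print(handler.records[0])  # demo_app.worker - INFO - hello from the child
```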
and in the file you will see something like:

10-22 22:19 root         INFO     Jackdaws love my big sphinx of quartz.
10-22 22:19 myapp.area1  DEBUG    Quick zephyrs blow, vexing daft Jim.
10-22 22:19 myapp.area1  INFO     How quickly daft jumping zebras vex.
10-22 22:19 myapp.area2  WARNING  Jail zesty vixen who grabbed pay from quack.
10-22 22:19 myapp.area2  ERROR    The five boxing wizards jump quickly.
As you can see, the DEBUG message only shows up in the file. The other messages are sent to both destinations. This example uses console and file handlers, but you can use any number and combination of handlers you choose.
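The same level-per-handler idea can be sketched without touching the console or filesystem by pointing StreamHandlers at in-memory streams. The dest_demo logger name and the StringIO stand-ins are illustrative only:

```python
import io
import logging

log_file = io.StringIO()   # stands in for a real log file
console = io.StringIO()    # stands in for sys.stderr

logger = logging.getLogger('dest_demo')
logger.setLevel(logging.DEBUG)

# the "file" handler accepts everything; the "console" handler
# only lets ERROR and above through
fh = logging.StreamHandler(log_file)
fh.setLevel(logging.DEBUG)
ch = logging.StreamHandler(console)
ch.setLevel(logging.ERROR)
logger.addHandler(fh)
logger.addHandler(ch)

logger.debug('only in the file')
logger.error('in both destinations')

print(log_file.getvalue())  # both messages
print(console.getvalue())   # only the ERROR message
```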
# read initial config file
logging.config.fileConfig('logging.conf')

# create and start listener on port 9999
t = logging.config.listen(9999)
t.start()

logger = logging.getLogger('simpleExample')

try:
    # loop through logging calls to see the difference
    # new configurations make, until Ctrl+C is pressed
    while True:
        logger.debug('debug message')
        logger.info('info message')
        logger.warn('warn message')
        logger.error('error message')
        logger.critical('critical message')
        time.sleep(5)
except KeyboardInterrupt:
    # cleanup
    logging.config.stopListening()
    t.join()

And here is a script that takes a filename and sends that file to the server, properly preceded with the binary-encoded length, as the new logging configuration:

#!/usr/bin/env python
import socket, sys, struct

with open(sys.argv[1], 'rb') as f:
    data_to_send = f.read()

HOST = 'localhost'
PORT = 9999
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print('connecting...')
s.connect((HOST, PORT))
print('sending config...')
s.send(struct.pack('>L', len(data_to_send)))
s.send(data_to_send)
s.close()
print('complete')
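For reference, the logging.conf read at startup is in the standard fileConfig format. A minimal sketch that defines the simpleExample logger used above might look like this; the exact section contents are an assumption based on the logger and handler names in the script:

```ini
[loggers]
keys=root,simpleExample

[handlers]
keys=consoleHandler

[formatters]
keys=simpleFormatter

[logger_root]
level=DEBUG
handlers=consoleHandler

[logger_simpleExample]
level=DEBUG
handlers=consoleHandler
qualname=simpleExample
propagate=0

[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=simpleFormatter
args=(sys.stdout,)

[formatter_simpleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
datefmt=
```

Sending an edited copy of such a file with the client script above reconfigures the running application on the fly.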
import logging, logging.handlers

rootLogger = logging.getLogger()
rootLogger.setLevel(logging.DEBUG)
socketHandler = logging.handlers.SocketHandler('localhost',
                    logging.handlers.DEFAULT_TCP_LOGGING_PORT)
# don't bother with a formatter, since a socket handler sends the event as
# an unformatted pickle
rootLogger.addHandler(socketHandler)

# Now, we can log to the root logger, or any other logger. First the root...
logging.info('Jackdaws love my big sphinx of quartz.')

# Now, define a couple of other loggers which might represent areas in your
# application:

logger1 = logging.getLogger('myapp.area1')
logger2 = logging.getLogger('myapp.area2')

logger1.debug('Quick zephyrs blow, vexing daft Jim.')
logger1.info('How quickly daft jumping zebras vex.')
logger2.warning('Jail zesty vixen who grabbed pay from quack.')
logger2.error('The five boxing wizards jump quickly.')

At the receiving end, you can set up a receiver using the SocketServer module. Here is a basic working example:

import pickle
import logging
import logging.handlers
import SocketServer
import struct
class LogRecordStreamHandler(SocketServer.StreamRequestHandler):
    """Handler for a streaming logging request.

    This basically logs the record using whatever logging policy is
    configured locally.
    """

    def handle(self):
        """
        Handle multiple requests - each expected to be a 4-byte length,
        followed by the LogRecord in pickle format. Logs the record
        according to whatever policy is configured locally.
        """
        while True:
            chunk = self.connection.recv(4)
            if len(chunk) < 4:
                break
            slen = struct.unpack('>L', chunk)[0]
            chunk = self.connection.recv(slen)
            while len(chunk) < slen:
                chunk = chunk + self.connection.recv(slen - len(chunk))
            obj = self.unPickle(chunk)
            record = logging.makeLogRecord(obj)
            self.handleLogRecord(record)
    def unPickle(self, data):
        return pickle.loads(data)

    def handleLogRecord(self, record):
        # if a name is specified, we use the named logger rather than the one
        # implied by the record.
        if self.server.logname is not None:
            name = self.server.logname
        else:
            name = record.name
        logger = logging.getLogger(name)
        # N.B. EVERY record gets logged. This is because Logger.handle
        # is normally called AFTER logger-level filtering. If you want
        # to do filtering, do it at the client end to save wasting
        # cycles and network bandwidth!
        logger.handle(record)

class LogRecordSocketReceiver(SocketServer.ThreadingTCPServer):
    """
    Simple TCP socket-based logging receiver suitable for testing.
    """

    allow_reuse_address = 1

    def __init__(self, host='localhost',
                 port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                 handler=LogRecordStreamHandler):
        SocketServer.ThreadingTCPServer.__init__(self, (host, port), handler)
        self.abort = 0
        self.timeout = 1
        self.logname = None

    def serve_until_stopped(self):
        import select
        abort = 0
        while not abort:
            rd, wr, ex = select.select([self.socket.fileno()],
                                       [], [],
                                       self.timeout)
            if rd:
                self.handle_request()
            abort = self.abort

def main():
    logging.basicConfig(
        format='%(relativeCreated)5d %(name)-15s %(levelname)-8s %(message)s')
    tcpserver = LogRecordSocketReceiver()
    print('About to start TCP server...')
    tcpserver.serve_until_stopped()

if __name__ == '__main__':
    main()

First run the server, and then the client. On the client side, nothing is printed on the console; on the server side, you should see something like:

About to start TCP server...
   59 root            INFO     Jackdaws love my big sphinx of quartz.
   59 myapp.area1     DEBUG    Quick zephyrs blow, vexing daft Jim.
   69 myapp.area1     INFO     How quickly daft jumping zebras vex.
   69 myapp.area2     WARNING  Jail zesty vixen who grabbed pay from quack.
   69 myapp.area2     ERROR    The five boxing wizards jump quickly.
Note that there are some security issues with pickle in some scenarios. If these affect you, you can use an alternative serialization scheme by overriding the makePickle() method and implementing your alternative there, as well as adapting the above script to use your alternative serialization.
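As a sketch of that suggestion, the hypothetical JSONSocketHandler below overrides makePickle() to emit length-prefixed JSON instead of a pickle. Only a hand-picked subset of record attributes is serialized, and a matching change would be needed in the receiver's unPickle() step. Because a SocketHandler only connects when a record is emitted, makePickle() can be exercised directly without a server listening:

```python
import json
import logging
import logging.handlers
import struct

class JSONSocketHandler(logging.handlers.SocketHandler):
    """Hypothetical handler that serializes records as JSON, not pickle."""
    def makePickle(self, record):
        d = {
            'name': record.name,
            'levelno': record.levelno,
            'msg': record.getMessage(),
        }
        data = json.dumps(d).encode('utf-8')
        # same length-prefixed framing the stock handler uses
        return struct.pack('>L', len(data)) + data

h = JSONSocketHandler('localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT)
record = logging.LogRecord('demo', logging.INFO, __file__, 1,
                           'hello %s', ('world',), None)
payload = h.makePickle(record)
(length,) = struct.unpack('>L', payload[:4])
print(json.loads(payload[4:4 + length].decode('utf-8')))
```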
    the extra context information repository passed to a LoggerAdapter.
    """
    def __getitem__(self, name):
        """
        To allow this instance to look like a dict.
        """
        from random import choice
        if name == 'ip':
            result = choice(['127.0.0.1', '192.168.0.1'])
        elif name == 'user':
            result = choice(['jim', 'fred', 'sheila'])
        else:
            result = self.__dict__.get(name, '?')
        return result

    def __iter__(self):
        """
        To allow iteration over keys, which will be merged into
        the LogRecord dict before formatting and output.
        """
        keys = ['ip', 'user']
        keys.extend(self.__dict__.keys())
        return keys.__iter__()
if __name__ == '__main__':
    from random import choice
    levels = (logging.DEBUG, logging.INFO, logging.WARNING,
              logging.ERROR, logging.CRITICAL)
    a1 = logging.LoggerAdapter(logging.getLogger('a.b.c'),
                               { 'ip' : '123.231.231.123', 'user' : 'sheila' })
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
    a1.debug('A debug message')
    a1.info('An info message with %s', 'some parameters')
    a2 = logging.LoggerAdapter(logging.getLogger('d.e.f'), ConnInfo())
    for x in range(10):
        lvl = choice(levels)
        lvlname = logging.getLevelName(lvl)
        a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')

When this script is run, the output should look something like this:

2008-01-18 14:49:54,023 a.b.c DEBUG    IP: 123.231.231.123 User: sheila   A debug message
2008-01-18 14:49:54,023 a.b.c INFO     IP: 123.231.231.123 User: sheila   An info message with some parameters
2008-01-18 14:49:54,023 d.e.f CRITICAL IP: 192.168.0.1     User: jim      A message at CRITICAL level with 2 parameters
2008-01-18 14:49:54,033 d.e.f INFO     IP: 192.168.0.1     User: jim      A message at INFO level with 2 parameters
2008-01-18 14:49:54,033 d.e.f WARNING  IP: 192.168.0.1     User: sheila   A message at WARNING level with 2 parameters
2008-01-18 14:49:54,033 d.e.f ERROR    IP: 127.0.0.1       User: fred     A message at ERROR level with 2 parameters
2008-01-18 14:49:54,033 d.e.f ERROR    IP: 127.0.0.1       User: sheila   A message at ERROR level with 2 parameters
2008-01-18 14:49:54,033 d.e.f WARNING  IP: 192.168.0.1     User: sheila   A message at WARNING level with 2 parameters
2008-01-18 14:49:54,033 d.e.f WARNING  IP: 192.168.0.1     User: jim      A message at WARNING level with 2 parameters
2008-01-18 14:49:54,033 d.e.f INFO     IP: 192.168.0.1     User: fred     A message at INFO level with 2 parameters
2008-01-18 14:49:54,033 d.e.f WARNING  IP: 192.168.0.1     User: sheila   A message at WARNING level with 2 parameters
2008-01-18 14:49:54,033 d.e.f WARNING  IP: 127.0.0.1       User: jim      A message at WARNING level with 2 parameters
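Besides passing a dict-like object, you can subclass LoggerAdapter and override its process() method, which receives the message and keyword arguments before the underlying logger sees them. A minimal sketch follows; CtxAdapter, the conn_id key, and the adapter_demo logger name are hypothetical:

```python
import io
import logging

class CtxAdapter(logging.LoggerAdapter):
    """Prepend a connection id from the adapter's extra dict to each message."""
    def process(self, msg, kwargs):
        return '[%s] %s' % (self.extra['conn_id'], msg), kwargs

stream = io.StringIO()
logger = logging.getLogger('adapter_demo')
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler(stream))

adapter = CtxAdapter(logger, {'conn_id': 'abc123'})
adapter.info('connection opened')

print(stream.getvalue().strip())  # [abc123] connection opened
```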
if __name__ == '__main__':
    levels = (logging.DEBUG, logging.INFO, logging.WARNING,
              logging.ERROR, logging.CRITICAL)
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
    a1 = logging.getLogger('a.b.c')
    a2 = logging.getLogger('d.e.f')

    f = ContextFilter()
    a1.addFilter(f)
    a2.addFilter(f)
    a1.debug('A debug message')
    a1.info('An info message with %s', 'some parameters')
    for x in range(10):
        lvl = choice(levels)
        lvlname = logging.getLevelName(lvl)
        a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')

which, when run, produces something like:

2010-09-06 22:38:15,292 a.b.c DEBUG    IP: 123.231.231.123 User: fred     A debug message
2010-09-06 22:38:15,300 a.b.c INFO     IP: 192.168.0.1     User: sheila   An info message with some parameters
2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
2010-09-06 22:38:15,300 d.e.f ERROR    IP: 127.0.0.1       User: jim      A message at ERROR level with 2 parameters
2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 127.0.0.1       User: sheila   A message at DEBUG level with 2 parameters
2010-09-06 22:38:15,300 d.e.f ERROR    IP: 123.231.231.123 User: fred     A message at ERROR level with 2 parameters
2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 192.168.0.1     User: jim      A message at CRITICAL level with 2 parameters
2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 192.168.0.1     User: jim      A message at DEBUG level with 2 parameters
2010-09-06 22:38:15,301 d.e.f ERROR    IP: 127.0.0.1       User: sheila   A message at ERROR level with 2 parameters
2010-09-06 22:38:15,301 d.e.f DEBUG    IP: 123.231.231.123 User: fred     A message at DEBUG level with 2 parameters
2010-09-06 22:38:15,301 d.e.f INFO     IP: 123.231.231.123 User: fred     A message at INFO level with 2 parameters
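The ContextFilter used in this example is defined earlier in the section; a minimal filter along those lines can be sketched as follows. Its filter() method attaches ip and user attributes to every record and returns True so that no records are dropped (the filter_demo logger name is illustrative only):

```python
import io
import logging
from random import choice

class ContextFilter(logging.Filter):
    """Inject simulated contextual information into each LogRecord."""
    USERS = ['jim', 'fred', 'sheila']
    IPS = ['123.231.231.123', '127.0.0.1', '192.168.0.1']

    def filter(self, record):
        record.ip = choice(self.IPS)
        record.user = choice(self.USERS)
        return True  # never reject a record

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter('IP: %(ip)-15s User: %(user)-8s %(message)s'))
logger = logging.getLogger('filter_demo')
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)
logger.addFilter(ContextFilter())

logger.info('A message')
print(stream.getvalue().strip())
```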
8 Using file rotation
Sometimes you want to let a log file grow to a certain size, then open a new file and log to that. You may want to keep a certain number of these files, and when that many files have been created, rotate the files so that the number of files and the size of the files both remain bounded. For this usage pattern, the logging package provides a RotatingFileHandler:

import glob
import logging
import logging.handlers

LOG_FILENAME = 'logging_rotatingfile_example.out'

# Set up a specific logger with our desired output level
my_logger = logging.getLogger('MyLogger')
my_logger.setLevel(logging.DEBUG)

# Add the log message handler to the logger
handler = logging.handlers.RotatingFileHandler(
              LOG_FILENAME, maxBytes=20, backupCount=5)

my_logger.addHandler(handler)

# Log some messages
for i in range(20):
    my_logger.debug('i = %d' % i)

# See what files are created
logfiles = glob.glob('%s*' % LOG_FILENAME)

for filename in logfiles:
    print(filename)

The result should be 6 separate files, each with part of the log history for the application:
logging_rotatingfile_example.out
logging_rotatingfile_example.out.1
logging_rotatingfile_example.out.2
logging_rotatingfile_example.out.3
logging_rotatingfile_example.out.4
logging_rotatingfile_example.out.5

The most current file is always logging_rotatingfile_example.out, and each time it reaches the size limit it is renamed with the suffix .1. Each of the existing backup files is renamed to increment the suffix (.1 becomes .2, etc.) and the .6 file is erased.

Obviously this example sets the log length much too small as an extreme example. You would want to set maxBytes to an appropriate value.
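If you would rather roll over on a schedule than on size, the logging package also provides TimedRotatingFileHandler. A brief sketch, rotating at midnight and keeping 7 backups; the temporary file location and TimedLogger name are only for demonstration:

```python
import logging
import logging.handlers
import os
import tempfile

# write to a scratch directory so the sketch is self-contained
log_path = os.path.join(tempfile.mkdtemp(), 'timed_rotation.log')

logger = logging.getLogger('TimedLogger')
logger.setLevel(logging.DEBUG)
handler = logging.handlers.TimedRotatingFileHandler(
    log_path, when='midnight', backupCount=7)
logger.addHandler(handler)

logger.debug('first message')
handler.close()

with open(log_path) as f:
    print(f.read().strip())  # first message
```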
LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'verbose': {
            'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
        },
        'simple': {
            'format': '%(levelname)s %(message)s'
        },
    },
    'filters': {
        'special': {
            '()': 'project.logging.SpecialFilter',
            'foo': 'bar',
        }
    },
    'handlers': {
        'null': {
            'level': 'DEBUG',
            'class': 'django.utils.log.NullHandler',
        },
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'simple'
        },
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler',
            'filters': ['special']
        }
    },
    'loggers': {
        'django': {
            'handlers': ['null'],
            'propagate': True,
            'level': 'INFO',
        },
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': False,
        },
        'myproject.custom': {
            'handlers': ['console', 'mail_admins'],
            'level': 'INFO',
            'filters': ['special']
        }
    }
}

For more information about this configuration, you can see the relevant section of the Django documentation.
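Because the Django example above depends on django.utils.log classes, it cannot be run on its own. Here is a self-contained sketch of feeding a dictionary to logging.config.dictConfig() using only stdlib classes; the dict_demo logger name and the in-memory stream are illustrative:

```python
import io
import logging
import logging.config

stream = io.StringIO()

CONFIG = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'simple': {'format': '%(levelname)s %(message)s'},
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'DEBUG',
            'formatter': 'simple',
            'stream': stream,  # a real stream object can be passed directly
        },
    },
    'loggers': {
        'dict_demo': {'handlers': ['console'], 'level': 'INFO'},
    },
}

logging.config.dictConfig(CONFIG)
logging.getLogger('dict_demo').info('configured via dictConfig')
print(stream.getvalue().strip())  # INFO configured via dictConfig
```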
        self.message = message
        self.kwargs = kwargs

    def __str__(self):
        return '%s >>> %s' % (self.message, json.dumps(self.kwargs))

_ = StructuredMessage   # optional, to improve readability
logging.basicConfig(level=logging.INFO, format='%(message)s')
logging.info(_('message 1', foo='bar', bar='baz', num=123, fnum=123.456))

If the above script is run, it prints:

message 1 >>> {"fnum": 123.456, "num": 123, "bar": "baz", "foo": "bar"}

Note that the order of items might be different according to the version of Python used.

If you need more specialised processing, you can use a custom JSON encoder, as in the following complete example:

from __future__ import unicode_literals

import json
import logging

# This next bit is to ensure the script runs unchanged on 2.x and 3.x
try:
    unicode
except NameError:
    unicode = str

class Encoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, set):
            return tuple(o)
        elif isinstance(o, unicode):
            return o.encode('unicode_escape').decode('ascii')
        return super(Encoder, self).default(o)

class StructuredMessage(object):
    def __init__(self, message, **kwargs):
        self.message = message
        self.kwargs = kwargs

    def __str__(self):
        s = Encoder().encode(self.kwargs)
        return '%s >>> %s' % (self.message, s)

_ = StructuredMessage   # optional, to improve readability
def main():
    logging.basicConfig(level=logging.INFO, format='%(message)s')
    logging.info(_('message 1', set_value=set([1, 2, 3]), snowman='\u2603'))

if __name__ == '__main__':
    main()

When the above script is run, it prints:

message 1 >>> {"snowman": "\u2603", "set_value": [1, 2, 3]}

Note that the order of items might be different according to the version of Python used.