Usually, we don’t pay much (if any) attention to the logging subsystem when we maintain an existing software product. Older code bases often use their own logging subsystem implementation instead of an existing logging framework. For systems implemented in C++, this doesn’t come as a surprise – there were no standardized, stable frameworks back then. The Boost set of C++ libraries added one in version 1.54, released in 2013.
As developers, we should use existing solutions to well-solved problems: our lives are easier if we rely on the work of those who’ve put a lot of time and thought into it, rather than reinventing the wheel. We specialize in writing software for our problem domain, not in writing loggers.
Logging is undeniably useful when we want to debug an application, analyze a crash, track down unexpected behavior caused by user input, or check performance and statistics. Usually, we log user inputs and actions, time and space data, error messages, counters, and much more. We also want to log only the important things, to avoid chaos: if there are a lot of log records carrying all kinds of information, it’s very hard to track the workflow and spot culprits.
That said, log records should be created when events happen: user actions and input, subsystem triggers, exceptions, or time-based events. We usually write them to the console, a file or the network, but we could also show them in the system tray, sound an alarm, etc.
The Boost.Log library is organized in three separate layers to achieve modularity, so it can be easily configured and customized with user-specific extensions. As shown in the picture below, the data collection layer is on the left side, the data processing layer is on the right side, and the interconnection layer with the core logic is in the middle. Each layer has its own responsibilities and deals with different attribute sets, as data flows from the sources on the left to the storage on the right.
This blog will cover the basic functionality with global loggers as a source, as well as common sink backends such as console, file and syslog.
To set up logging in the traditional way, we use the global loggers provided by the library. They can easily be customized when creating log records, using different features for specific purposes. In the code snippet below, we can see the definition of a global logger with severity levels, which can be used later in the filtering or formatting phase. This means that not every log record will end up in log storage.
Global loggers can also support channels, exception handling, or combinations of these features. Apart from global loggers, log messages can be constructed, for example, by loggers embedded in a class (even separately for different instances of that class), from a child process’s console output, or from network data.
In the previous example, we saw how to enrich a log record with severity level information. The general way to enrich a log record is to use named attributes: each attribute represents a function, and its result is called the attribute value. These values can also be used in the filtering and formatting phases. There are three attribute sets, differentiated by scope: global, thread-specific and source-specific.
If we define a global attribute, it is attached to every log record, from all kinds of sources. Thread-specific attributes participate only in log records created in a specific thread, while source-specific attributes participate, for example, only in records from one instance of a class. When a log record is processed, these attribute sets are combined; if attribute names clash, the global one has the lowest priority and the source-specific one the highest.
In the code snippet below, we can see how to define the global attribute, set its name and bind it to a particular function. There are predefined functions: for example, those used for timestamp, duration, named scope, current process ID, current process name, current thread ID and many more. Of course, we can add any custom definition – for example, a random number attribute.
As already mentioned, some log records will not end up in log storage. The basic task of the logging core is log record filtering, i.e. deciding whether a message is passed on to a specific sink or discarded. Filtering can be done at the global level and at the sink-specific level; it operates on the provided attribute set and is performed only once per record, regardless of which logging source created it.
Apart from filtering, the logging core also maintains the global and thread-specific attribute sets, dispatches log records, provides exception handlers, and offers a flush method used for synchronization with all sinks in use.
In the code snippet below, we can see how to set a filter as a conditional expression, using the severity level, with a global logger as the source and console output as the sink; a custom attribute could be combined into the same expression.
In the Boost.Log library design, sinks are split into two parts: the frontend and the backend. Common functionality, such as filtering, is implemented in the frontend, while the actual processing of log records is done in the backend. Frontends are usually used “out of the box”, while backends can be extended for special purposes, although all common types are already implemented by the library.
When the log record is passed to the sink, we can use its message and apply formatting before it’s put into the log storage. We can define specific formatting logic for each sink that accepted the record.
In the code snippet below, we can see how to set a formatter on a given sink. We can even use conditional expressions along with the stream operator: for example, to set different text colors depending on the severity level, or to insert a timestamp with a custom date and time format.
Common functionality and services shared among all sinks are implemented in sink frontends; these include formatting, filtering, exception handling and thread synchronization. Sink frontends also define how the logging core interacts with the sink backend, where log record processing is actually done. For the pair to operate, both must be registered with the logging core.
Speaking of synchronization, the library implements three types of sink frontends: unlocked (for single-threaded applications only), synchronous and asynchronous. With an asynchronous sink frontend, log records are passed to the backend by a dedicated thread, so a backend that blocks does not stall the application threads, and record queues can be used if needed. The asynchronous_sink class template can be instantiated with the following record queueing strategies:
- unbounded_fifo_queue (the default) – an unlimited first-in, first-out queue
- unbounded_ordering_queue – an unlimited queue that orders records (e.g. by timestamp) before processing
- bounded_fifo_queue – a first-in, first-out queue with limited capacity
- bounded_ordering_queue – an ordering queue with limited capacity
Bounded queues have the following overflow strategies:
- drop_on_overflow – silently discard records that arrive while the queue is full
- block_on_overflow – block the logging thread until the queue can accept the record
The library implements various types of sink backends, for example: text stream, text file, text multi-file, text IPC message queue, syslog, Windows debugger output, Windows event log.
The text file backend is interesting because of its extended feature set:
- file rotation based on file size, on a time point or on a time interval
- file name patterns with date/time and counter placeholders
- auto-flush after each record
- open/close handlers, e.g. to write a header or footer on rotation
- cooperation with a file collector that moves rotated files to a target directory and limits their total size or the minimum free space
All these parameters can be set when the backend is constructed, as shown in the code example below. File rotation happens as soon as any of the configured conditions is met.
Console sink backend logger with text coloring and DEBUG severity level threshold
File text backend logger with INFO as severity level threshold
Syslog backend logger with INFO as severity level threshold
In this blog, we’ve learned how to implement the most common application logging requirements with the Boost.Log library. It took less than 600 lines of code in the running example application to configure three different sink backends, with support for many features. This approach is quite helpful when we want to replace, in a reasonable amount of time, the home-grown logging subsystem of an existing application, and to enrich it with a plethora of features.