Splunking Drupal


Drupal developers rely on various logging systems to troubleshoot and investigate Drupal exceptions and errors. We can use frontend Apache/Nginx access and error logs along with native PHP logs, but the Drupal database logging module sits at the core of these logging frameworks because it captures all triggers to internal Drupal hooks, events, and activities occurring on the Drupal site. Drupal's database logging framework is the bucket that captures not only the operations of the different modules (core and contributed) but also usage and performance data. It is therefore an indispensable debugging tool, and the core report at <drupal-site>/admin/reports/dblog is perhaps the page developers and site admins visit most to ensure that the Drupal site is operating properly.

As a Drupal site is prepped to go live, it is a common best practice, for performance reasons, to turn off database logging and route its traffic to Syslog instead. The Syslog logging format is identical to database logging; only the output framework differs. But regardless of the output framework, database or Syslog, relying on either for troubleshooting is a painstaking approach. The database report is a primitive interface that hasn't evolved much over the Drupal releases: we can filter on message type and severity, but we can't plot these over a timeline, search on message text, filter on source IP, or identify the URI that triggered a given message. We can export the data to an Excel spreadsheet to slice and dice it, but that is unproductive and offline. And in Syslog format, we rely on even more primitive techniques, "grep" and "tail", hoping to spot a pattern.

Over the past few years, I've attended a number of different Drupal conferences, and I don't recall hearing the term "Splunk" at any of them, whether in a keynote or a session, even though the technology is renowned and can remedy many of the flaws mentioned above, especially if the Drupal site is complex in nature with a large volume of hits and visits. You can Google "Splunk Software", but its simplest definition is: a software platform designed to parse, index, search, analyze, and visualize machine data gathered from websites, applications, sensors, devices, appliances, etc. In our case, we will focus on Drupal-generated logging data.

Just to illustrate the power of Splunk, I've ingested the Syslog file for a Drupal site into Splunk. With some regex, Splunk parsed out the separate fields below and added them to its other intrinsic fields:
  • base_url -> base URL of the site.
  • timestamp -> Unix timestamp of the log entry.
  • type -> category to which the message belongs.
  • ip -> IP address of the user triggering the message.
  • request_uri -> the requested URI.
  • referrer -> HTTP referrer.
  • uid -> ID of the user triggering the message.
  • link -> link associated with the message.
  • message -> message text stored in the log.
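To make the extraction concrete, here is a minimal sketch in Python of how those fields line up in a raw Syslog entry. It assumes Drupal's default Syslog message format, which emits the fields pipe-delimited after the syslog header; the sample line, hostname, and message text are invented for illustration.

```python
# Hypothetical sample line: Drupal's Syslog module writes its fields
# pipe-delimited after the standard syslog header (default format:
# !base_url|!timestamp|!type|!ip|!request_uri|!referer|!uid|!link|!message).
LINE = ("Mar  1 12:34:56 web01 drupal: https://example.com|1393677296|php|"
        "203.0.113.7|https://example.com/node/42|https://example.com/|1||"
        "Notice: Undefined index in example_module()")

FIELDS = ["base_url", "timestamp", "type", "ip",
          "request_uri", "referrer", "uid", "link", "message"]

def parse_drupal_syslog(line):
    """Split the Drupal payload (after 'drupal: ') into named fields."""
    payload = line.split("drupal: ", 1)[1]
    # maxsplit keeps any '|' characters inside the message text intact
    values = payload.split("|", len(FIELDS) - 1)
    return dict(zip(FIELDS, values))

event = parse_drupal_syslog(LINE)
print(event["ip"])           # 203.0.113.7
print(event["request_uri"])  # https://example.com/node/42
```

Splunk performs the equivalent extraction at search time with a regex-based field transform, so the raw events never need preprocessing.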

Once in Splunk, we can slice and dice the data using its Search Processing Language (SPL). The use cases are myriad, but below are a couple of typical ones:

Identify request_uris generating the most messages of type "warning"
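A search along these lines could produce that report; this is a sketch that assumes the sourcetype is named drupal_syslog and the fields listed above have been extracted:

```
sourcetype=drupal_syslog type="warning"
| top limit=10 request_uri
```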


Identify IPs with the most invalid login attempts and identify their source countries.
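Splunk's built-in iplocation command does the geo lookup here. A sketch, again assuming a drupal_syslog sourcetype and that failed logins are identifiable by their message text (the exact string depends on the Drupal version):

```
sourcetype=drupal_syslog "Login attempt failed"
| iplocation ip
| stats count BY ip, Country
| sort - count
```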


That second example (plotting failed login attempts) can become a correlation search that generates a notable event when, for example, multiple invalid login attempts originate from the same geographical region. We can get more creative and throw the frontend Apache/Nginx access logs into the mix as a different Splunk sourcetype, perhaps joining the events on request_id.
It's worth noting that Splunk isn’t just for parsing and indexing technical logs (Syslog, etc.), but can parse and index application logs and provide business insight. For example, if we have a Drupal e-commerce site, we can capture the completed/successful e-commerce transactions and plot these on a Splunk dashboard and analyze things like:
  • Sales by geographical region/country.
  • Sales by product category.
  • Sales by product.
  • Monthly/quarterly/yearly sales comparison.
  • Sales by payment types.
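As a sketch of the first of those analyses, assuming completed transactions are logged to a hypothetical drupal_commerce sourcetype with extracted amount and country fields:

```
sourcetype=drupal_commerce status=completed
| stats sum(amount) AS revenue BY country
| sort - revenue
```

Swapping the BY clause for category or payment_type, or replacing stats with a timechart over months, covers the rest of the list.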


Granted, many deployed Drupal sites are hosted in the cloud with providers like Acquia and Pantheon, where Syslog data is accessible via offline rsync jobs. But hosting facilities are catching up and have made their logs monitorable via inputs.conf (Splunk lingo for the configuration that monitors logs), as illustrated by Acquia and Pantheon; with that approach, data is ingested, parsed, and indexed in Splunk as live events with no delays.
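For a self-hosted site, the equivalent inputs.conf stanza is short; this is a sketch where the log path, index, and sourcetype names are assumptions to adapt to your deployment:

```
# inputs.conf -- monitor the Drupal syslog file (path and names are examples)
[monitor:///var/log/drupal.log]
sourcetype = drupal_syslog
index = web
disabled = false
```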
So, to recap: download Splunk Enterprise; it is free, and the trial version lets you index up to half a gig per day, which can be sufficient for a small site. The Syslog file I uploaded for this blog is 250MB for a day's worth of log entries, and the site is not heavily visited (a few hundred to low thousands of hits per day); when we add access and PHP logs, volume can quickly increase. Yet, for Drupal developers and site admins of major Drupal sites, Splunk is an indispensable tool.
Thanks for reading.
