Troubleshooting Tips for Splunk DB Connect 3


Hello, fellow Splunkers! In this post, we'll take a look at a few pitfalls to be aware of when using the powerful, but sometimes hard-to-troubleshoot, DB Connect (DBX) 3.1.1 application from Splunkbase. So let's jump right in, shall we?

NULL values for Timestamp

If a database table that you're importing (in either rising column or batch mode) contains a column representing a timestamp, you'll most likely want to use that column as the timestamp of the event (database record) within Splunk, instead of using the index time as the timestamp. However, be careful! If there happen to be NULL values in that column within any record, the import process will ultimately fail and you will see this cryptic message in the DBX logs:

2017-11-30 15:42:48.057 -0500  [QuartzScheduler_Worker-8] ERROR org.easybatch.core.job.BatchJob - Unable to read next record
java.lang.NullPointerException: null
        at com.splunk.dbx.server.dbinput.recordreader.DbInputRecordReader.readRecord(
        at org.easybatch.core.job.BatchJob.readRecord(
        at org.easybatch.core.job.BatchJob.readAndProcessBatch(
        at org.easybatch.extensions.quartz.Job.execute(
        at org.quartz.simpl.SimpleThreadPool$
2017-11-30 15:42:48.058 -0500  [QuartzScheduler_Worker-8] INFO  org.easybatch.core.job.BatchJob - Job '<name_of_your_dbx_input>' finished with status: FAILED
That stacktrace doesn't really tell us why DBX was "Unable to read next record" or why the import "FAILED", does it? Well, it turns out that this error manifests itself when there are NULL values in the timestamp column.
Preventing this is quite simple. All you need to do is add the following condition to the WHERE clause of your query:
<name_of_timestamp_column> IS NOT NULL
You will not end up importing the records with a NULL timestamp, but those records were the reason the import was failing in the first place!
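As a concrete illustration, here is a minimal sketch of what a filtered input query might look like. The table and column names (orders, last_updated) are hypothetical, and in rising-column mode DBX substitutes its checkpoint value for the ? placeholder:

```sql
-- Hypothetical rising-column input query; orders / last_updated are illustrative names.
-- Excluding NULL timestamps up front prevents the NullPointerException at read time.
SELECT order_id, status, last_updated
FROM orders
WHERE last_updated > ?
  AND last_updated IS NOT NULL
ORDER BY last_updated ASC
```

The IS NOT NULL condition costs nothing for well-formed rows and simply skips the rows that would otherwise crash the job.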
Recognizing a Table
Sometimes, you will need to write somewhat complex queries that use variables, temp tables, and the like. Unfortunately, DBX is not robust enough at this point to always recognize these temp tables as actual tables from which it can derive a rising column and timestamp. So, if you run into a situation where the DBX UI will not let you choose a rising column from a temp table, wrap your query in an outer SELECT like the following:
SELECT rising_col, timestamp_col, other_col1, other_col2 FROM (
    <your_original_query>
) AS query1
This way, DBX will recognize your declaration of the temp table (within the inner query) as "query1" and be able to extract the necessary rising and timestamp columns from it.
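As a concrete (and entirely hypothetical) illustration, the table and column names below are made up, but they show the shape of a wrapped query: the inner query holds your complex logic, and the outer SELECT exposes plain columns the DBX UI can recognize:

```sql
-- Hypothetical example; events / users and all column names are illustrative.
SELECT rising_col, timestamp_col, other_col1, other_col2 FROM (
    SELECT e.event_id   AS rising_col,
           e.created_at AS timestamp_col,
           e.status     AS other_col1,
           u.name       AS other_col2
    FROM events e
    JOIN users u ON u.id = e.user_id
) AS query1
```

Aliasing the columns inside the inner query keeps the outer SELECT simple, which is exactly what the DBX UI needs in order to offer them as rising column and timestamp candidates.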
Problems with Rising Column
Rising column is obviously a very powerful feature of DBX that allows Splunk to incrementally import new database records. However, there are a couple of pitfalls to be aware of when trying to implement a rising column.
First of all, if the rising column is intended to be an integer, double-check that it actually is an integer and not a string populated with numeric values. String values compare lexicographically, character by character, rather than numerically, so a string rising column will not advance the way an integer does. Records will not be imported incrementally the way you expect, which defeats the whole purpose of using a rising column!
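To see the problem: as strings, '9' sorts after '10', so a checkpoint of '9' would wrongly skip a new record whose value is '10'. If you cannot change the column's type, one possible workaround (a sketch only; CAST syntax and the exact integer type vary by database, and the column name here is hypothetical) is to cast the column in the query itself:

```sql
-- Lexicographic ordering: '10' < '9' as strings, even though 10 > 9 as numbers.
-- Hypothetical workaround: cast the string column to an integer and use the
-- cast value (rising_id) as the rising column in the DBX input.
SELECT CAST(id_as_string AS INTEGER) AS rising_id, timestamp_col, other_col
FROM my_table
ORDER BY rising_id ASC
```

Note that a cast like this can prevent the database from using an index on the original column, so fixing the column type at the source is preferable when you can.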
In addition, be careful about using timestamps as the rising column. Sometimes we are left with no choice when we are desperately seeking a rising column, but it can be suboptimal: if multiple records share the same timestamp, some can be skipped or re-imported depending on when the checkpoint is saved, as discussed on the Splunk community forums.
Hopefully you found this article helpful. If you have any questions, please leave a comment below.
Happy Splunking!
