Error Handling

Errors can occur while the SQLstream s-Server is evaluating queries. This topic describes the kinds of errors and the different contexts in which they occur, and discusses how the s-Server indicates and handles errors. Of particular interest are errors that occur while a long-running statement executes.

All errors are recorded in the s-Server trace log. You can also access errors through the Global Error Stream.

For a list of error messages see Appendix B - List of Error Messages.

Kinds of Errors

An error indicates that a requested operation has failed. Here "operation" usually means preparing or evaluating a SQL statement, but it can also mean fetching one result from the result set of a query, or a single execution of a prepared statement.

Errors can be classified by severity and by context.

A warning means the operation completed correctly, but something about it was unexpected.
An ordinary error means the operation has failed safely: a DDL or DML command has had no effect, a query has produced no results, and the system remains in the state it was in before the requested operation.
A fatal error means that the s-Server has detected an illegal internal state and must restart itself from its last checkpoint. This is abnormal and should be reported to SQLstream, Inc. as a bug.

The error context itself has several dimensions:

the kind of operation that failed: a DDL statement, a DML statement, a table query, or a streaming query.
the stage of statement processing where the error occurs: preparation (divided into parsing, validation, and planning), or execution.
whether the error is detected immediately when the request is made, or later (possibly much later, for a long-running query).

Error Handling

The s-Server's SQL parser and validator detect invalid statements at prepare time. To illustrate this, consider some simple statements that act on the stream orders, defined by

CREATE STREAM orders (
       order_id INTEGER NOT NULL,
       customer_id INTEGER NOT NULL,
       item_count INTEGER,
       total_cost DECIMAL(8,2),
       comment VARCHAR(128));

Some examples of prepare-time errors are:

a syntax error: SELECT STREAM FROM ORDERS;
an invalid column: SELECT ITEMS_CT FROM ORDERS;
an invalid datatype in an expression: SELECT * FROM ORDERS WHERE (comment > 40);

The s-Server detects errors like these immediately: it records the error in the s-Server log and signals the JDBC client driver, which fails the JDBC call by throwing a java.sql.SQLException. The s-Server prepares a statement in a context that knows about the JDBC client; this is feasible because preparation is quick.
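
For example, a JDBC client might guard statement preparation as in the following minimal sketch. The connection URL and class name are illustrative assumptions, not definitive connection settings:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class PrepareTimeErrorExample {
    public static void main(String[] args) throws SQLException {
        // Illustrative connection string; substitute your actual s-Server JDBC URL.
        Connection conn = DriverManager.getConnection("jdbc:sqlstream:sdp://myserver:5570");
        try {
            // ITEMS_CT is not a column of ORDERS, so validation fails at prepare time
            // and the driver throws java.sql.SQLException right here.
            PreparedStatement ps = conn.prepareStatement(
                "SELECT STREAM ITEMS_CT FROM orders");
            ps.close();
        } catch (SQLException e) {
            // The exception message carries the validator's explanation of the failure.
            System.err.println("Prepare failed: " + e.getMessage());
        } finally {
            conn.close();
        }
    }
}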

Errors that occur during statement execution generally result from invalid data, and are tied to a specific row of input. Some examples are:

an inserted row rejected for having a null value for a not-null column.
an inserted row rejected for violating some other column constraint.
an arithmetic error (like divide by zero) evaluating an expression.
arithmetic overflow or underflow evaluating an expression.
arithmetic overflow in the accumulator for an aggregate function, such as SUM().

Handling such run-time errors can be tricky, due to the very nature of a distributed, streaming data system. With SQLstream, independent data-source clients insert data into streams, and independent data-reading clients receive result rows computed from the streaming inputs by applying (potentially complex) chains of relational operators.

Suppose an error occurs during execution as one of these operators processes an "offending row". It is easy for the operator to detect the error, but not so easy to determine whom to tell, and how. The prepare-time context, which pointed to a live JDBC client thread waiting for news, is gone. Running on a RAMP, the relational operator performs one step in a large computation: it has no idea which SQL statement has failed or which clients are affected.

The Control Node has that information, so the error message is relayed from the relational operator to the Control Node to the client.

However, this mechanism is not complete in the current s-Server release. Consequently some kinds of execution errors are not reported directly to a client, namely:

an error while executing a prepared INSERT EXPEDITED to insert a row
an error receiving a row from the ResultSet obtained from a SELECT STREAM

The same problem holds with a pump: the client context which created or started the pump is gone, and the pump is executing as a "server daemon" thread.

The workaround for this defect is to check the s-Server log for the pertinent error messages or warnings. (See the next topic regarding how to see errors from a remote client via streams.)

Error Logging

The s-Server records every error and warning with an entry in the s-Server log, using the standard Java logging mechanism. Configuration of system tracing is explained in the topic Configuring System Parameters in the SQLstream Administrator Guide. Fatal errors are logged at level FATAL, regular errors at level SEVERE, and warnings at level WARNING. In its default configuration the s-Server also logs major events -- such as the start of execution of any SQL statement -- at level INFO. Thus INFO messages appearing in the log shortly before a SEVERE message often indicate the context of an error.

The logfile adapter can follow the s-Server log and feed selected items into a stream or streams. In principle a SQLstream application could use these streams to recover from some errors.

An Example

As a very simple illustration, consider again the stream:

orders (order_id INTEGER NOT NULL, customer_id INTEGER NOT NULL, item_count INTEGER,
       total_cost DECIMAL(8,2), comment VARCHAR(128));

One client thread runs SELECT STREAM ROWTIME, *, (total_cost / item_count) AS avg_item_cost FROM orders.

Another thread prepares INSERT EXPEDITED INTO orders (ROWTIME, order_id, customer_id, item_count, total_cost, comment) VALUES (?, ?, ?, ?, ?, ?); and executes it once per row, binding the following series of values (a JDBC sketch of this inserting thread appears after the table):

#   ROWTIME   order_id   customer_id   item_count   total_cost   comment
1   9:00:00   1          100           2            1.98         a valid row
2   9:01:00   2          100           two          1.98         a type error
3   9:02:00   3          null          5            3.00         invalid null
4   9:03:00   4          102           0            0.00         zeros are trouble
5   9:04:00   5          101           7            0.84         a valid row
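
The inserting thread might look like the following minimal JDBC sketch; the connection URL is an illustrative assumption, and only the first row's bindings are shown. As discussed above, a row rejected while the prepared INSERT EXPEDITED executes may not be reported back through the JDBC call:

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;

public class OrderWriter {
    public static void main(String[] args) throws SQLException {
        // Illustrative connection string; substitute your actual s-Server JDBC URL.
        Connection conn = DriverManager.getConnection("jdbc:sqlstream:sdp://myserver:5570");
        PreparedStatement ps = conn.prepareStatement(
            "INSERT EXPEDITED INTO orders (ROWTIME, order_id, customer_id, item_count, total_cost, comment) "
            + "VALUES (?, ?, ?, ?, ?, ?)");
        try {
            // Row 1: a valid row.
            ps.setTimestamp(1, Timestamp.valueOf("1999-09-09 09:00:00"));
            ps.setInt(2, 1);                              // order_id
            ps.setInt(3, 100);                            // customer_id
            ps.setInt(4, 2);                              // item_count
            ps.setBigDecimal(5, new BigDecimal("1.98"));  // total_cost
            ps.setString(6, "a valid row");               // comment
            ps.executeUpdate();
            // ... rows 2 through 5 are bound and executed the same way.
            // A row rejected during execution (such as row 3, with its null
            // customer_id) may not raise an SQLException here; the error is
            // recorded in the s-Server log instead.
        } catch (SQLException e) {
            System.err.println("Insert failed: " + e.getMessage());
        } finally {
            ps.close();
            conn.close();
        }
    }
}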

Here rows 2 and 3 are rejected when the insert is executed. Row 4 is inserted, but computing its avg_item_cost requires evaluating (0.00 / 0), a run-time error, so that row is dropped from the result set. The query returns only rows 1 and 5:

ROWTIME   order_id   customer_id   item_count   total_cost   comment       avg_item_cost
9:00:00   1          100           2            1.98         a valid row   0.99
9:04:00   5          101           7            0.84         a valid row   0.12
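
On the reading side, the rejected and dropped rows simply never appear; no exception reaches the client for them. A minimal sketch of such a reader is shown below (the connection URL is again an illustrative assumption):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class OrderReader {
    public static void main(String[] args) throws SQLException {
        // Illustrative connection string; substitute your actual s-Server JDBC URL.
        Connection conn = DriverManager.getConnection("jdbc:sqlstream:sdp://myserver:5570");
        Statement stmt = conn.createStatement();
        // For a streaming query, rs.next() blocks until the next result row arrives.
        ResultSet rs = stmt.executeQuery(
            "SELECT STREAM ROWTIME, *, (total_cost / item_count) AS avg_item_cost FROM orders");
        try {
            while (rs.next()) {
                // Only rows 1 and 5 of the inserted data ever arrive here; the rows
                // that were rejected or dropped produce no output and no exception.
                System.out.println(rs.getTimestamp("ROWTIME")
                        + "  order " + rs.getInt("ORDER_ID")
                        + "  avg_item_cost " + rs.getBigDecimal("AVG_ITEM_COST"));
            }
        } catch (SQLException e) {
            System.err.println("Error reading result set: " + e.getMessage());
        } finally {
            rs.close();
            stmt.close();
            conn.close();
        }
    }
}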

All errors are recorded in the s-Server log. Here is an excerpt, including INFO lines that show context, but omitting many irrelevant trace messages (marked by ...):

Aug 10, 2008 4:20:00 AM net.sf.farrago.db.FarragoDbSession prepare
INFO: INSERT INTO sales.orders (ROWTIME, order_id, customer_id, item_count,
                 total_cost, comment)
   VALUES (TIMESTAMP '1999-09-09 9:00:00',    1,      100,       2,
                 1.98, 'a valid row')
Aug 10, 2008 4:20:01 AM net.sf.farrago.db.FarragoDbSession prepare
INFO: INSERT INTO sales.orders (ROWTIME, order_id, customer_id, item_count,
                 total_cost, comment)
   VALUES (TIMESTAMP '1999-09-09 9:01:00',    2,      100,       'two',
                 1.98, 'a type error')
Aug 10, 2008 4:20:01 AM org.eigenbase.sql.validate.SqlValidatorException <init>
SEVERE: org.eigenbase.sql.validate.SqlValidatorException:
 Cannot assign to target field 'ITEM_COUNT' of type INTEGER from
                 source field 'EXPR$3' of type CHAR(3)
...
Aug 10, 2008 4:20:01 AM net.sf.farrago.db.FarragoDbSession prepare
INFO: INSERT INTO sales.orders (ROWTIME, order_id, customer_id, item_count,
                 total_cost, comment)
   VALUES (TIMESTAMP '1999-09-09 9:02:00', 3,  null,  5, 3.00, 'invalid null')
Aug 10, 2008 4:20:02 AM net.sf.fennel.xo.FennelCalcRel.#340:671 <native>
WARNING: calculator error [0]:PC=15 Code=22004
...
Aug 10, 2008 4:20:02 AM net.sf.farrago.db.FarragoDbSession prepare
INFO: INSERT INTO sales.orders (ROWTIME, order_id, customer_id, item_count, total_cost, comment)
   VALUES (TIMESTAMP '1999-09-09 9:03:00',   4, 102,    0,   0.00,  'zeros are trouble')
Aug 10, 2008 4:20:03 AM org.eigenbase.util.EigenbaseException <init>
SEVERE: org.eigenbase.util.EigenbaseException: Overflow during calculation or cast.
INFO: INSERT INTO sales.orders (ROWTIME, order_id, customer_id, item_count, total_cost, comment)
   VALUES (TIMESTAMP '1999-09-09 9:04:00',  5,  101,  7,    0.84,  'a valid row')