Firebird 3.0 Language Reference

The source of much copied reference material: Paul Vinkenoog
Copyright © 2017-2023 Firebird Project and all contributing authors, under the Public Documentation License Version 1.0. Please refer to the License Notice in the Appendix

1. About the Firebird 3.0 Language Reference

This Language Reference describes the SQL language supported by Firebird 3.0.

This Firebird 3.0 Language Reference is the second comprehensive manual to cover all aspects of the query language used by developers to communicate, through their applications, with the Firebird relational database management system.

1.1. Subject

The subject of this volume is wholly Firebird’s implementation of the SQL relational database language. Firebird conforms closely with international standards for SQL, from data type support, data storage structures, referential integrity mechanisms, to data manipulation capabilities and access privileges. Firebird also implements a robust procedural language — procedural SQL (PSQL) — for stored procedures, triggers and dynamically-executable code blocks. These are the areas addressed in this volume.

This document does not cover configuration of Firebird, the Firebird command-line tools, or its programming APIs.

1.2. Authorship

For the Firebird 3.0 version, the Firebird 2.5 Language Reference was taken as the base, and Firebird 3.0 information was added based on the Firebird 3.0 release notes, feature documentation, and the Russian Firebird 3.0 Language Reference. This document, however, is not a direct translation of the Russian Firebird 3.0 Language Reference.

1.2.1. Contributors

Direct Content
  • Dmitry Filippov (writer)

  • Alexander Karpeykin (writer)

  • Alexey Kovyazin (writer, editor)

  • Dmitry Kuzmenko (writer, editor)

  • Denis Simonov (writer, editor)

  • Paul Vinkenoog (writer, designer)

  • Dmitry Yemanov (writer)

  • Mark Rotteveel (writer)

Resource Content
  • Adriano dos Santos Fernandes

  • Alexander Peshkov

  • Vladyslav Khorsun

  • Claudio Valderrama

  • Helen Borrie

  • and others

1.3. Acknowledgments

Sponsors and Other Donors

See also the Firebird 2.5 Language Reference for its donors.

Sponsors of the Russian Language Reference Manual
  • Moscow Exchange (Russia)

    Moscow Exchange is the largest exchange holding in Russia and Eastern Europe, founded on December 19, 2011, through the consolidation of the MICEX (founded in 1992) and RTS (founded in 1995) exchange groups. Moscow Exchange ranks among the world’s top 20 exchanges by trading in bonds and by the total capitalization of shares traded, as well as among the 10 largest exchange platforms for trading derivatives.

  • IBSurgeon (ibase.ru) (Russia)

    Technical support and developer of administrator tools for the Firebird DBMS.

1.4. Contributing

There are several ways you can contribute to the documentation of Firebird, or to Firebird in general.

2. SQL Language Structure

This reference describes the SQL language supported by Firebird.

2.1. Background to Firebird’s SQL Language

To begin, a few points about the characteristics that underlie Firebird’s language implementation.

2.1.1. SQL Flavours

Distinct subsets of SQL apply to different sectors of activity. The SQL subsets in Firebird’s language implementation are:

  • Dynamic SQL (DSQL)

  • Procedural SQL (PSQL)

  • Embedded SQL (ESQL)

  • Interactive SQL (ISQL)

Dynamic SQL is the major part of the language, corresponding to Part 2 (SQL/Foundation) of the SQL specification. DSQL represents statements passed by client applications through the public Firebird API and processed by the database engine.

Procedural SQL augments Dynamic SQL to allow compound statements containing local variables, assignments, conditions, loops and other procedural constructs. PSQL corresponds to Part 4 (SQL/PSM) of the SQL specification. Originally, PSQL extensions were available only in persistent stored modules (procedures and triggers), but in more recent releases they have been surfaced in Dynamic SQL as well (see EXECUTE BLOCK).

Embedded SQL defines the DSQL subset supported by Firebird gpre, the application which allows you to embed SQL constructs into your host programming language (C, C++, Pascal, Cobol, etc.) and preprocess those embedded constructs into the proper Firebird API calls.

Only a portion of the statements and expressions implemented in DSQL are supported in ESQL.

Interactive SQL (ISQL) refers to the language that can be executed using Firebird isql, the command-line application for accessing databases interactively. As a regular client application, its native language is DSQL. It also offers a few additional commands that are not available outside its specific environment.

The DSQL and PSQL subsets are presented in full in this reference. Neither the ESQL nor the ISQL flavours are described here unless mentioned explicitly.

2.1.2. SQL Dialects

SQL dialect is a term that defines the specific features of the SQL language that are available when accessing a database. SQL dialects can be defined at the database level and specified at the connection level. Three dialects are available:

  • Dialect 1 is intended solely to allow backward compatibility with legacy databases from very old InterBase versions, v.5 and below. Dialect 1 databases retain certain language features that differ from Dialect 3, the default for Firebird databases.

    • Date and time information are stored in a DATE data type. A TIMESTAMP data type is also available; it is identical to this DATE implementation.

    • Double quotes may be used as an alternative to apostrophes for delimiting string data. This is contrary to the SQL standard — double quotes are reserved for a distinct syntactic purpose both in standard SQL and in Dialect 3. Double-quoting strings is therefore to be avoided strenuously.

    • The precision for NUMERIC and DECIMAL data types is smaller than in Dialect 3 and, if the precision of a fixed decimal number is greater than 9, Firebird stores it internally as a long floating point value.

    • The BIGINT (64-bit integer) data type is not supported.

    • Identifiers are case-insensitive and must always comply with the rules for regular identifiers — see the section Identifiers below.

    • Although generator values are stored as 64-bit integers, a Dialect 1 client request, SELECT GEN_ID (MyGen, 1), for example, will return the generator value truncated to 32 bits.

  • Dialect 2 is available only for the client connection and cannot be set as the database dialect. It is intended to assist debugging of possible problems with legacy data when migrating a database from dialect 1 to 3.

  • In Dialect 3 databases,

    • numbers (DECIMAL and NUMERIC data types) are stored internally as long fixed point values (scaled integers) when the precision is greater than 9.

    • The TIME data type is available for storing time-of-day data only.

    • The DATE data type stores only date information.

    • The 64-bit integer data type BIGINT is available.

    • Double quotes are reserved for delimiting non-regular identifiers, enabling object names that are case-sensitive or that do not meet the requirements for regular identifiers in other ways.

    • All strings must be delimited with single quotes (apostrophes).

    • Generator values are stored as 64-bit integers.

Use of Dialect 3 is strongly recommended for newly developed databases and applications. Both database and connection dialects should match, except under migration conditions with Dialect 2.

This reference describes the semantics of SQL Dialect 3 unless specified otherwise.

2.1.3. Error Conditions

Processing of every SQL statement either completes successfully or fails due to a specific error condition. Error handling can be done both in the client application and on the server side, using PSQL.

2.2. Basic Elements: Statements, Clauses, Keywords

The primary construct in SQL is the statement. A statement defines what the database management system should do with a particular data or metadata object. More complex statements contain simpler constructs — clauses and options.

Clauses

A clause defines a certain type of directive in a statement. For instance, the WHERE clause in a SELECT statement and in some other data manipulation statements (UPDATE, DELETE) specifies criteria for searching one or more tables for the rows that are to be selected, updated or deleted. The ORDER BY clause specifies how the output data — result set — should be sorted.

Options

Options, being the simplest constructs, are specified in association with specific keywords to provide qualification for clause elements. Where alternative options are available, it is usual for one of them to be the default, used if nothing is specified for that option. For instance, the SELECT statement will return all of the rows that match the search criteria unless the DISTINCT option restricts the output to non-duplicated rows.

Keywords

All words that are included in the SQL lexicon are keywords. Some keywords are reserved, meaning their usage as identifiers for database objects, parameter names or variables is prohibited in some or all contexts. Non-reserved keywords can be used as identifiers, although it is not recommended. From time to time, non-reserved keywords may become reserved when some new language feature is introduced.

For instance, the following statement will be executed without errors because, although ABS is a keyword, it is not a reserved word.

CREATE TABLE T (ABS INT NOT NULL);

On the contrary, the following statement will return an error because ADD is both a keyword and a reserved word.

CREATE TABLE T (ADD INT NOT NULL);

Refer to the list of reserved words and keywords in the chapter Reserved Words and Keywords.

2.3. Identifiers

All database objects have names, often called identifiers. The maximum identifier length is 31 bytes. Two types of names are valid as identifiers: regular names, similar to variable names in regular programming languages, and delimited names that are specific to SQL. To be valid, each type of identifier must conform to a set of rules, as follows:

2.3.1. Rules for Regular Object Identifiers

  • Length cannot exceed 31 characters

  • The name must start with an unaccented, 7-bit ASCII alphabetic character. It may be followed by other 7-bit ASCII letters, digits, underscores or dollar signs. No other characters, including spaces, are valid. The name is case-insensitive, meaning it can be declared and used in either upper or lower case. Thus, from the system’s point of view, the following names are the same:

    fullname
    FULLNAME
    FuLlNaMe
    FullName
Regular name syntax
<name> ::=
  <letter> | <name><letter> | <name><digit> | <name>_ | <name>$

<letter> ::= <upper letter> | <lower letter>

<upper letter> ::= A | B | C | D | E | F | G | H | I | J | K | L | M |
                   N | O | P | Q | R | S | T | U | V | W | X | Y | Z

<lower letter> ::= a | b | c | d | e | f | g | h | i | j | k | l | m |
                   n | o | p | q | r | s | t | u | v | w | x | y | z

<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9

2.3.2. Rules for Delimited Object Identifiers

  • Length cannot exceed 31 bytes. Identifiers are stored in character set UNICODE_FSS, which means characters outside the ASCII range are stored using 2 or 3 bytes.

  • The entire string must be enclosed in double-quotes, e.g. "anIdentifier"

  • It may contain any character from the UNICODE_FSS character set, including accented characters, spaces and special characters

  • An identifier can be a reserved word

  • Delimited identifiers are case-sensitive in all contexts

  • Trailing spaces in delimited names are removed, as with any string constant

  • Delimited identifiers are available in Dialect 3 only. For more details on dialects, see SQL Dialects

Delimited name syntax
<delimited name> ::= "<permitted_character>[<permitted_character> ...]"

A delimited identifier such as "FULLNAME" is the same as the regular identifiers FULLNAME, fullname, FullName, and so on. The reason is that Firebird stores regular identifiers in upper case, regardless of how they were defined or declared. Delimited identifiers are always stored according to the exact case of their definition or declaration. Thus, "FullName" (quoted) is different from FullName (unquoted, i.e. regular) which is stored as FULLNAME in the metadata.
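
To illustrate (a brief sketch with hypothetical table and column names):

CREATE TABLE PERSONS (
  "FullName" VARCHAR(60),  -- delimited: stored exactly as FullName
  LASTNAME   VARCHAR(30)   -- regular: stored as LASTNAME
);

-- The delimited column must be referenced with matching case and quotes;
-- the regular column may be written in any case:
SELECT "FullName", LastName FROM PERSONS;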

2.4. Literals

Literals are used to directly represent data. Examples of standard types of literals are:

integer        - 0, -34, 45, 0X080000000
fixed-point    - 0.0, -3.14
floating-point - 3.23e-23
string         - 'text', 'don''t!'
binary string  - x'48656C6C6F20776F726C64'
date           - DATE '2018-01-19'
time           - TIME '15:12:56'
timestamp      - TIMESTAMP '2018-01-19 13:32:02'
boolean        - true, false, unknown
null state     - null

Details about handling the literals for each data type are discussed in the next chapter, Data Types and Subtypes.

2.5. Operators and Special Characters

A set of special characters is reserved for use as operators or separators.

<special char> ::=
    <space> | " | % | & | ' | ( | ) | * | + | , | -
  | . | / | : | ; | < | = | > | ? | [ | ] | ^ | { | }

Some of these characters, alone or in combinations, may be used as operators (arithmetical, string, logical), as SQL command separators, to quote identifiers and to mark the limits of string literals or comments.

Operator Syntax
<operator> ::=
    <string concatenation operator>
  | <arithmetic operator>
  | <comparison operator>
  | <logical operator>

<string concatenation operator> ::= "||"

<arithmetic operator> ::= * | / | + | -

<comparison operator> ::=
    =  | <> | != | ~= | ^= | > | < | >= | <=
  | !> | ~> | ^> | !< | ~< | ^<

<logical operator> ::= NOT | AND | OR

For more details on operators, see Expressions.
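
A brief illustration combining several kinds of operators (the one-row system table RDB$DATABASE serves as a dummy source; the column aliases are arbitrary):

SELECT
  'Fire' || 'bird' AS FULL_NAME,  -- string concatenation
  2 + 3 * 4 AS ARITH,             -- arithmetic: 14
  5 <> 6 AS COMPARED,             -- comparison: TRUE (BOOLEAN in Firebird 3.0)
  NOT (1 > 2) AS LOGIC            -- logical: TRUE
FROM RDB$DATABASE;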

2.6. Comments

Comments may be present in SQL scripts, SQL statements and PSQL modules. A comment can be any text specified by the code writer, usually used to document how particular parts of the code work. The parser ignores the text of comments.

Firebird supports two types of comments: block and in-line.

Syntax
<comment> ::= <block comment> | <single-line comment>

<block comment> ::=
  /* <character>[<character> …] */

<single-line comment> ::=
  -- <character>[<character> …]<end line>

Block comments start with the /* character pair and end with the */ character pair. Text in block comments may be of any length and can occupy multiple lines.

In-line comments start with a pair of hyphen characters, --, and continue up to the end of the current line.

Example
CREATE PROCEDURE P(APARAM INT)
  RETURNS (B INT)
AS
BEGIN
  /* This text will be ignored during the execution of the statement
     since it is a comment
  */
  B = APARAM + 1; -- In-line comment
  SUSPEND;
END

3. Data Types and Subtypes

Data of various types are used to:

  • define columns in a database table in the CREATE TABLE statement or change columns using ALTER TABLE

  • declare or change a domain using the CREATE DOMAIN or ALTER DOMAIN statements

  • declare local variables in stored procedures, PSQL blocks and triggers and specify parameters in stored procedures

  • indirectly specify arguments and return values when declaring external functions (UDFs — user-defined functions)

  • provide arguments for the CAST() function when explicitly converting data from one type to another

Table 1. Overview of Data Types

BIGINT
  Size: 64 bits
  Precision & Limits: from -2^63 to 2^63 - 1
  Description: The data type is available in Dialect 3 only

BLOB
  Size: varying
  Precision & Limits: the size of a BLOB segment is limited to 64 KB; the maximum size of a BLOB field is 4 GB
  Description: A data type of variable size for storing large amounts of data, such as images, text or digital sounds. The basic structural unit is a segment. The BLOB subtype defines its content

BOOLEAN
  Size: 8 bits
  Precision & Limits: false, true, unknown
  Description: Boolean data type

CHAR(n), CHARACTER(n)
  Size: n characters; size in bytes depends on the encoding (the number of bytes per character)
  Precision & Limits: from 1 to 32,767 bytes
  Description: A fixed-length character data type. Trailing spaces are not stored in the database but are restored to match the defined length when the column is displayed on the client side; network traffic is reduced by not sending spaces over the LAN. If the number of characters is not specified, 1 is used by default

DATE
  Size: 32 bits
  Precision & Limits: from 0001-01-01 AD to 9999-12-31 AD
  Description: ISC_DATE. Date only, no time element

DECIMAL(precision, scale)
  Size: varying (16, 32 or 64 bits)
  Precision & Limits: precision = from 1 to 18, defines the minimum number of digits to store; scale = from 0 to 18, defines the number of digits after the decimal point
  Description: A number with a decimal point that has scale digits after the point. scale must be less than or equal to precision. Example: DECIMAL(10,3) holds numbers in exactly the following format: ppppppp.sss

DOUBLE PRECISION
  Size: 64 bits
  Precision & Limits: from 2.225 × 10^-308 to 1.797 × 10^308
  Description: Double-precision IEEE, ~15 digits; reliable size depends on the platform

FLOAT
  Size: 32 bits
  Precision & Limits: from 1.175 × 10^-38 to 3.402 × 10^38
  Description: Single-precision IEEE, ~7 digits

INTEGER, INT
  Size: 32 bits
  Precision & Limits: from -2,147,483,648 to 2,147,483,647
  Description: Signed long

NUMERIC(precision, scale)
  Size: varying (16, 32 or 64 bits)
  Precision & Limits: precision = from 1 to 18, defines the exact number of digits to store; scale = from 0 to 18, defines the number of digits after the decimal point
  Description: A number with a decimal point that has scale digits after the point. scale must be less than or equal to precision. Example: NUMERIC(10,3) holds numbers in exactly the following format: ppppppp.sss

SMALLINT
  Size: 16 bits
  Precision & Limits: from -32,768 to 32,767
  Description: Signed short (word)

TIME
  Size: 32 bits
  Precision & Limits: from 00:00:00.0000 to 23:59:59.9999
  Description: ISC_TIME. Time of day. It cannot be used to store an interval of time

TIMESTAMP
  Size: 64 bits (2 × 32 bits)
  Precision & Limits: from start of day 0001-01-01 AD to end of day 9999-12-31 AD
  Description: Date and time of day

VARCHAR(n), CHAR VARYING(n), CHARACTER VARYING(n)
  Size: n characters; size in bytes depends on the encoding (the number of bytes per character)
  Precision & Limits: from 1 to 32,765 bytes
  Description: Variable-length string type. The total size in bytes cannot be larger than 32 KB - 3, taking the encoding into account. Two additional bytes store the declared length. There is no default size: the n argument is mandatory. Leading and trailing spaces are stored and are not trimmed, except for those trailing characters that are past the declared length

Note About Dates

Bear in mind that a time series consisting of dates in past centuries is processed without taking into account the actual historical facts, as though the Gregorian calendar were applicable throughout the entire series.

3.1. Integer Data Types

The SMALLINT, INTEGER and BIGINT data types are used for integers of various precision in Dialect 3. Firebird does not support an unsigned integer data type.

3.1.1. SMALLINT

The 16-bit SMALLINT data type is for compact storage of integer data for which only a narrow range of possible values is required. Numbers of the SMALLINT type are within the range from -2^15 to 2^15 - 1, that is, from -32,768 to 32,767.

SMALLINT Examples
CREATE DOMAIN DFLAG AS SMALLINT DEFAULT 0 NOT NULL
  CHECK (VALUE=-1 OR VALUE=0 OR VALUE=1);

CREATE DOMAIN RGB_VALUE AS SMALLINT;

3.1.2. INTEGER

The INTEGER data type is a 32-bit integer. The shorthand name of the data type is INT. Numbers of the INTEGER type are within the range from -2^31 to 2^31 - 1, that is, from -2,147,483,648 to 2,147,483,647.

INTEGER Example
CREATE TABLE CUSTOMER (
  CUST_NO INTEGER NOT NULL,
  CUSTOMER VARCHAR(25) NOT NULL,
  CONTACT_FIRST VARCHAR(15),
  CONTACT_LAST VARCHAR(20),
  ...
    PRIMARY KEY (CUST_NO) )

3.1.3. BIGINT

BIGINT is an SQL:99-compliant 64-bit integer data type, available only in Dialect 3. If a client uses Dialect 1, the generator value sent by the server is reduced to a 32-bit integer (INTEGER). When Dialect 3 is used for connection, the generator value is of type BIGINT.

Numbers of the BIGINT type are within the range from -2^63 to 2^63 - 1, or from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.

3.1.4. Hexadecimal Format for Integer Numbers

Starting from Firebird 2.5, constants of the three integer types can be specified in hexadecimal format by means of 9 to 16 hexadecimal digits for BIGINT or 1 to 8 digits for INTEGER. Hex representation for writing to SMALLINT is not explicitly supported but Firebird will transparently convert a hex number to SMALLINT if necessary, provided it falls within the ranges of negative and positive SMALLINT.

The usage and numerical value ranges of hexadecimal notation are described in more detail in the discussion of number constants in the chapter entitled Common Language Elements.

Examples Using Integer Types
CREATE TABLE WHOLELOTTARECORDS (
  ID BIGINT NOT NULL PRIMARY KEY,
  DESCRIPTION VARCHAR(32)
);

INSERT INTO MYBIGINTS VALUES ( -- assumes a table MYBIGINTS with eleven BIGINT columns
  -236453287458723,
  328832607832,
  22,
  -56786237632476,
  0X6F55A09D42,       -- 478177959234
  0X7FFFFFFFFFFFFFFF, -- 9223372036854775807
  0XFFFFFFFFFFFFFFFF, -- -1
  0X80000000,         -- -2147483648, an INTEGER
  0X080000000,        -- 2147483648, a BIGINT
  0XFFFFFFFF,         -- -1, an INTEGER
  0X0FFFFFFFF         -- 4294967295, a BIGINT
);

The hexadecimal INTEGERs in the above example are automatically cast to BIGINT before being inserted into the table. However, this happens after the numerical value is determined, so 0x80000000 (8 digits) and 0x080000000 (9 digits) will be saved as different BIGINT values.

3.2. Floating-Point Data Types

Floating point data types are stored in an IEEE 754 binary format that comprises sign, exponent and mantissa. Precision is dynamic, corresponding to the physical storage format of the value, which is exactly 4 bytes for the FLOAT type and 8 bytes for DOUBLE PRECISION.

Considering the peculiarities of storing floating-point numbers in a database, these data types are not recommended for storing monetary data. For the same reasons, columns with floating-point data are not recommended for use as keys or to have uniqueness constraints applied to them.

For testing data in columns with floating-point data types, expressions should check using a range, for instance, BETWEEN, rather than searching for exact matches.
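
For example (a minimal sketch; PAYMENTS and AMOUNT are hypothetical names):

SELECT *
FROM PAYMENTS
WHERE AMOUNT BETWEEN 14.999 AND 15.001;  -- rather than AMOUNT = 15.0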

When using these data types in expressions, extreme care is advised regarding the rounding of evaluation results.

3.2.1. FLOAT

The FLOAT data type has an approximate precision of 7 significant digits. To ensure the safety of storage, rely on no more than 6 digits.

3.2.2. DOUBLE PRECISION

The DOUBLE PRECISION data type is stored with an approximate precision of 15 digits.

3.3. Fixed-Point Data Types

Fixed-point data types ensure the predictability of multiplication and division operations, making them the choice for storing monetary values. Firebird implements two fixed-point data types: NUMERIC and DECIMAL. According to the standard, both types limit the stored number to the declared scale (the number of digits after the decimal point).

Different treatments limit precision for each type: precision for NUMERIC columns is exactly “as declared”, while DECIMAL columns accept numbers whose precision is at least equal to what was declared.

The behaviour of NUMERIC and DECIMAL in Firebird is like the SQL-standard DECIMAL; the precision is at least equal to what was declared.

For instance, NUMERIC(4, 2) defines a number consisting altogether of four digits, including two digits after the decimal point; that is, it can have up to two digits before the point and no more than two digits after the point. If the number 3.1415 is written to a column with this data type definition, the value of 3.14 will be saved in the NUMERIC(4, 2) column.

The form of declaration for fixed-point data, for instance, NUMERIC(p, s), is common to both types. It is important to realise that the s argument in this template is scale, rather than “a count of digits after the decimal point”. Understanding the mechanism for storing and retrieving fixed-point data should help to visualise why: for storage, the number is multiplied by 10^s (10 to the power of s), converting it to an integer; when read, the integer is converted back.
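
For example, with NUMERIC(4, 2) the value 3.14 is stored as the integer 314 (3.14 × 10^2); on retrieval, the stored integer is divided by 10^2 to return 3.14.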

The method of storing fixed-point data in the DBMS depends on several factors: declared precision, database dialect, declaration type.

Table 2. Method of Physical Storage for Real Numbers

  Precision  Data type           Dialect 1         Dialect 3
  1 - 4      NUMERIC             SMALLINT          SMALLINT
  1 - 4      DECIMAL             INTEGER           INTEGER
  5 - 9      NUMERIC or DECIMAL  INTEGER           INTEGER
  10 - 18    NUMERIC or DECIMAL  DOUBLE PRECISION  BIGINT

3.3.1. NUMERIC

Data Declaration Format
  NUMERIC
| NUMERIC(precision)
| NUMERIC(precision, scale)
Table 3. NUMERIC Type Parameters

  Parameter  Description
  precision  Precision, between 1 and 18. Defaults to 9.
  scale      Scale, between 0 and precision. Defaults to 0.

Storage Examples

Further to the explanation above, the DBMS will store NUMERIC data according to the declared precision and scale. Some more examples are:

NUMERIC(4) stored as      SMALLINT (exact data)
NUMERIC(4,2)              SMALLINT (data × 10^2)
NUMERIC(10,4) (Dialect 1) DOUBLE PRECISION
              (Dialect 3) BIGINT (data × 10^4)

Always keep in mind that the storage format depends on the precision. For instance, you might define the column type as NUMERIC(2,2) presuming that its range of values will be -0.99…0.99. However, the actual range of values for the column will be -327.68…327.67, because the NUMERIC(2,2) data type is stored in the SMALLINT format. In storage, the NUMERIC(4,2), NUMERIC(3,2) and NUMERIC(2,2) data types are in fact the same. It means that if you really want to store data in a column with the NUMERIC(2,2) data type and limit the range to -0.99…0.99, you will have to create a constraint for it, as sketched below.
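
A minimal sketch of such a constraint, using a hypothetical domain name:

CREATE DOMAIN D_FRACTION AS NUMERIC(2, 2)
  CHECK (VALUE IS NULL OR VALUE BETWEEN -0.99 AND 0.99);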

3.3.2. DECIMAL

Data Declaration Format
  DECIMAL
| DECIMAL(precision)
| DECIMAL(precision, scale)
Table 4. DECIMAL Type Parameters

  Parameter  Description
  precision  Precision, between 1 and 18. Defaults to 9.
  scale      Scale, between 0 and precision. Defaults to 0.

Storage Examples

The storage format in the database for DECIMAL is very similar to NUMERIC, with some differences that are easier to observe with the help of some more examples:

DECIMAL(4) stored as      INTEGER (exact data)
DECIMAL(4,2)              INTEGER (data × 10^2)
DECIMAL(10,4) (Dialect 1) DOUBLE PRECISION
              (Dialect 3) BIGINT (data × 10^4)

3.4. Data Types for Dates and Times

The DATE, TIME and TIMESTAMP data types are used to work with data containing dates and times. Dialect 3 supports all three types, while Dialect 1 has only DATE. The DATE type in Dialect 3 is “date-only”, whereas the Dialect 1 DATE type stores both date and time-of-day, equivalent to TIMESTAMP in Dialect 3. Dialect 1 has no “date-only” type.

Dialect 1 DATE data can be defined alternatively as TIMESTAMP and this is recommended for new definitions in Dialect 1 databases.

Fractions of Seconds

If fractions of seconds are stored in date and time data types, Firebird stores them to ten-thousandths of a second. If a lower granularity is preferred, the fraction can be specified explicitly as thousandths, hundredths or tenths of a second in Dialect 3 databases of ODS 11 or higher.

Some useful knowledge about subseconds precision:

The time-part of a TIME or TIMESTAMP is a 4-byte WORD, with room for deci-millisecond precision; time values are stored as the number of deci-milliseconds elapsed since midnight. The actual precision of values stored in or read from time(stamp) functions and variables is:

  • CURRENT_TIME defaults to seconds precision and can be specified up to milliseconds precision with CURRENT_TIME (0|1|2|3)

  • CURRENT_TIMESTAMP milliseconds precision. Precision from seconds to milliseconds can be specified with CURRENT_TIMESTAMP (0|1|2|3)

  • Literal 'NOW': milliseconds precision

  • Functions DATEADD() and DATEDIFF() support up to milliseconds precision. Deci-milliseconds can be specified but they are rounded to the nearest integer before any operation is performed

  • The EXTRACT() function returns up to deci-milliseconds precision with the SECOND and MILLISECOND arguments

  • For TIME and TIMESTAMP literals, Firebird happily accepts up to deci-milliseconds precision, but truncates (not rounds) the time part to the nearest lower or equal millisecond. Try, for example, SELECT TIME '14:37:54.1249' FROM rdb$database

  • The ‘+’ and ‘-’ operators work with deci-milliseconds precision, but only within the expression. As soon as something is stored or even just SELECTed from RDB$DATABASE, it reverts to milliseconds precision

Deci-milliseconds precision is rare and is not currently stored in columns or variables. The best assumption to make from all this is that, although Firebird stores TIME and the TIMESTAMP time-part values as the number of deci-milliseconds (10^-4 seconds) elapsed since midnight, the actual precision could vary from seconds to milliseconds.

3.4.1. DATE

The DATE data type in Dialect 3 stores only date without time. The available range for storing data is from January 01, 1 to December 31, 9999.

Dialect 1 has no “date-only” type.

In Dialect 1, date literals without a time part, as well as 'TODAY', 'YESTERDAY' and 'TOMORROW' automatically get a zero time part.

If, for some reason, it is important to you to store a Dialect 1 timestamp literal with an explicit zero time-part, the engine will accept a literal like '2016-12-25 00:00:00.0000'. However, '2016-12-25' would have precisely the same effect, with fewer keystrokes!

3.4.2. TIME

The TIME data type is available in Dialect 3 only. It stores the time of day within the range from 00:00:00.0000 to 23:59:59.9999.

If you need to get the time-part from DATE in Dialect 1, you can use the EXTRACT function.

Examples Using EXTRACT()
EXTRACT (HOUR FROM DATE_FIELD)
EXTRACT (MINUTE FROM DATE_FIELD)
EXTRACT (SECOND FROM DATE_FIELD)

See also the EXTRACT() function in the chapter entitled Built-in Functions.

3.4.3. TIMESTAMP

The TIMESTAMP data type is available in Dialect 3 and Dialect 1. It comprises two 32-bit words — a date-part and a time-part — to form a structure that stores both date and time-of-day. It is the same as the DATE type in Dialect 1.

The EXTRACT function works equally well with TIMESTAMP as with the Dialect 1 DATE type.

3.4.4. Operations Using Date and Time Values

The method of storing date and time values makes it possible to involve them as operands in some arithmetic operations. In storage, a date value or date-part of a timestamp is represented as the number of days elapsed since “date zero” — November 17, 1898 — whilst a time value or the time-part of a timestamp is represented as the number of seconds (with fractions of seconds taken into account) since midnight.

An example is to subtract an earlier date, time or timestamp from a later one, resulting in an interval of time, in days and fractions of days.

Table 5. Arithmetic Operations for Date and Time Data Types (Operand 1 <operation> Operand 2 → Result)

  • DATE + TIME → TIMESTAMP
  • DATE + numeric value n → DATE increased by n whole days. Fractional values are rounded (not floored) to the nearest integer
  • TIME + DATE → TIMESTAMP
  • TIME + numeric value n → TIME increased by n seconds. The fractional part is taken into account
  • TIMESTAMP + numeric value n → TIMESTAMP, where the date will advance by the number of days and part of a day represented by number n — so “+ 2.75” will push the date forward by 2 days and 18 hours
  • DATE - DATE → number of days elapsed, within the range DECIMAL(9, 0)
  • DATE - numeric value n → DATE reduced by n whole days. Fractional values are rounded (not floored) to the nearest integer
  • TIME - TIME → number of seconds elapsed, within the range DECIMAL(9, 4)
  • TIME - numeric value n → TIME reduced by n seconds. The fractional part is taken into account
  • TIMESTAMP - TIMESTAMP → number of days and part-day, within the range DECIMAL(18, 9)
  • TIMESTAMP - numeric value n → TIMESTAMP, where the date will decrease by the number of days and part of a day represented by number n — so “- 2.25” will decrease the date by 2 days and 6 hours
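
For example (a quick sketch that can be run against any database; the column aliases are arbitrary):

SELECT
  DATE '2018-01-19' - DATE '2018-01-01' AS DAYS_ELAPSED,  -- 18
  TIMESTAMP '2018-01-19 13:32:02' + 2.75 AS SHIFTED_TS    -- 2018-01-22 07:32:02
FROM RDB$DATABASE;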

Notes

In Dialect 1, the DATE type is treated as TIMESTAMP.

See also

DATEADD, DATEDIFF

3.5. Character Data Types

For working with character data, Firebird has the fixed-length CHAR and the variable-length VARCHAR data types. The maximum size of text data stored in these data types is 32,767 bytes for CHAR and 32,765 bytes for VARCHAR. The maximum number of characters that will fit within these limits depends on the CHARACTER SET being used for the data under consideration. The collation sequence does not affect this maximum, although it may affect the maximum size of any index that involves the column.

If no character set is explicitly specified when defining a character object, the default character set specified when the database was created will be used. If the database does not have a default character set defined, the field gets the character set NONE.

3.5.1. Unicode

Most current development tools support Unicode, implemented in Firebird with the character sets UTF8 and UNICODE_FSS. UTF8 comes with collations for many languages. UNICODE_FSS is more limited and is used mainly by Firebird internally for storing metadata. Keep in mind that one UTF8 character occupies up to 4 bytes, thus limiting the size of CHAR fields to 8,191 characters (32,767/4).

The actual “bytes per character” value depends on the range the character belongs to. Non-accented Latin letters occupy 1 byte, Cyrillic letters from the WIN1251 encoding occupy 2 bytes in UTF8, characters from other encodings may occupy up to 4 bytes.

The UTF8 character set in Firebird supports the latest version of the Unicode standard, which makes it the recommended choice for international databases.

3.5.2. Client Character Set

While working with strings, it is essential to keep the character set of the client connection in mind. If there is a mismatch between the character set of the stored data and that of the client connection, string data are automatically re-encoded, both when data are sent from the client to the server and when they are sent back from the server to the client. For example, if the database was created in the WIN1251 encoding but KOI8R or UTF8 is specified in the client’s connection parameters, the re-encoding will be transparent.

3.5.3. Special Character Sets

Character set NONE

The character set NONE is a special character set in Firebird. It can be characterized such that each byte is a part of a string, but the string is stored in the system without any clues about what constitutes any character: character encoding, collation, case, etc. are simply unknown. It is the responsibility of the client application to deal with the data and provide the means to interpret the string of bytes in some way that is meaningful to the application and the human user.

Character set OCTETS

Data in OCTETS encoding are treated as bytes that may not actually be interpreted as characters. OCTETS provides a way to store binary data, which could be the results of some Firebird functions. The database engine has no concept of what it is meant to do with a string of bits in OCTETS, other than just store it and retrieve it. Again, the client side is responsible for validating the data, presenting them in formats that are meaningful to the application and its users and handling any exceptions arising from decoding and encoding them.

3.5.4. Collation Sequence

Each character set has a default collation sequence (COLLATE) that specifies the collation order. Usually, it provides nothing more than ordering based on the numeric code of the characters and a basic mapping of upper- and lower-case characters. If some behaviour is needed for strings that is not provided by the default collation sequence and a suitable alternative collation is supported for that character set, a COLLATE collation clause can be specified in the column definition.

A COLLATE collation clause can be applied in other contexts besides the column definition. For greater-than/less-than comparison operations, it can be added in the WHERE clause of a SELECT statement. If output needs to be sorted in a special alphabetic sequence, or case-insensitively, and the appropriate collation exists, then a COLLATE clause can be included with the ORDER BY clause when rows are being sorted on a character field and with the GROUP BY clause in case of grouping operations.

Case-Insensitive Searching

For a case-insensitive search, the UPPER function could be used to convert both the search argument and the searched strings to upper-case before attempting a match:

…
where upper(name) = upper(:flt_name)

For strings in a character set that has a case-insensitive collation available, you can simply apply the collation to compare the search argument and the searched strings directly. For example, using the WIN1251 character set, the collation PXW_CYRL is case-insensitive for this purpose:

…
WHERE FIRST_NAME COLLATE PXW_CYRL >= :FLT_NAME
…
ORDER BY NAME COLLATE PXW_CYRL
See also

CONTAINING

UTF8 Collation Sequences

The following table shows the possible collation sequences for the UTF8 character set.

Table 6. Collation Sequences for Character Set UTF8

  UCS_BASIC — Collation works according to the position of the character in the table (binary). Added in Firebird 2.0
  UNICODE — Collation works according to the UCA algorithm (Unicode Collation Algorithm) (alphabetical). Added in Firebird 2.0
  UTF8 — The default binary collation, identical to UCS_BASIC, which was added for SQL compatibility
  UNICODE_CI — Case-insensitive collation, works without taking character case into account. Added in Firebird 2.1
  UNICODE_CI_AI — Case-insensitive, accent-insensitive collation, works alphabetically without taking character case or accents into account. Added in Firebird 2.5

Example

An example of a collation for the UTF8 character set that ignores the case and accentuation of characters (similar to COLLATE PXW_CYRL):

...
ORDER BY NAME COLLATE UNICODE_CI_AI

3.5.5. Character Indexes

In Firebird versions earlier than 2.0, a problem could occur when building an index for character columns that use a non-standard collation sequence: the length of an indexed field was limited to 252 bytes with no COLLATE specified, or 84 bytes if COLLATE was specified. Multi-byte character sets and compound indexes limit the size even further.

Starting from Firebird 2.0, the maximum length for an index equals one quarter of the page size, i.e. from 1,024 — for page size 4,096 — to 4,096 bytes — for page size 16,384. The maximum length of an indexed string is 9 bytes less than that quarter-page limit.

Calculating Maximum Length of an Indexed String Field

The following formula calculates the maximum length of an indexed string (in characters):

max_char_length = FLOOR((page_size / 4 - 9) / N)

where N is the number of bytes per character in the character set.
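
For example, for a page size of 4,096 bytes and a 4-byte-per-character set such as UTF8, the maximum is FLOOR((4096 / 4 - 9) / 4) = FLOOR(1015 / 4) = 253 characters.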

The table below shows the maximum length of an indexed string (in characters), according to page size and character set, calculated using this formula.

Table 7. Maximum Index Lengths by Page Size and Character Size

  Page Size   Bytes per character
                  1      2      3      4      6
  4,096       1,015    507    338    253    169
  8,192       2,039  1,019    679    509    339
  16,384      4,087  2,043  1,362  1,021    682

With case-insensitive collations (“_CI”), one character in the index will occupy not 4, but 6 (six) bytes, so the maximum key length for a page of — for example — 4,096 bytes, will be 169 characters.

3.5.6. Character Types in Detail

CHAR

CHAR is a fixed-length data type. If the entered number of characters is less than the declared length, trailing spaces will be added to the field. Generally, the pad character does not have to be a space: it depends on the character set. For example, the pad character for the OCTETS character set is zero.

The full name of this data type is CHARACTER, but there is no requirement to use full names and people rarely do so.

Fixed-length character data can be used to store codes whose length is standard and has a definite “width”, such as those kept in lookup tables. An example of such a code is an EAN13 barcode — 13 characters, all filled.

Declaration Syntax
{CHAR | CHARACTER} [(length)]
  [CHARACTER SET <set>] [COLLATE <name>]

If no length is specified, it is taken to be 1.

A valid length is from 1 to the maximum number of characters that can be accommodated within 32,767 bytes.

Formally, the COLLATE clause is not part of the data type declaration, and its position depends on the syntax of the statement.

VARCHAR

VARCHAR is the basic string type for storing texts of variable length, up to a maximum of 32,765 bytes. The stored structure is equal to the actual size of the data plus 2 bytes where the length of the data is recorded.

All characters that are sent from the client application to the database are considered meaningful, including the leading and trailing spaces. However, trailing spaces are not stored: they will be restored upon retrieval, up to the recorded length of the string.

The full name of this type is CHARACTER VARYING. Another variant of the name is written as CHAR VARYING.

Syntax
{VARCHAR | {CHAR | CHARACTER} VARYING} (length)
  [CHARACTER SET <set>] [COLLATE <name>]

Formally, the COLLATE clause is not part of the data type declaration, and its position depends on the syntax of the statement.

NCHAR

NCHAR is a fixed-length character data type with the ISO8859_1 character set predefined. In all other respects it is the same as CHAR.

Syntax
{NCHAR | NATIONAL {CHAR | CHARACTER}} [(length)]

If no length is specified, it is taken to be 1.

A similar data type is available for the variable-length string type: NATIONAL {CHAR | CHARACTER} VARYING.
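
A brief declaration sketch (the table and column names are hypothetical):

CREATE TABLE COUNTRY_CODES (
  CODE NCHAR(3),                       -- fixed length, character set ISO8859_1
  NAME NATIONAL CHARACTER VARYING(40)  -- the variable-length counterpart
);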

3.6. Boolean Data Type

Firebird 3.0 introduced a fully-fledged Boolean data type.

3.6.1. BOOLEAN

The SQL:2008 compliant BOOLEAN data type (8 bits) comprises the distinct truth values TRUE and FALSE. Unless prohibited by a NOT NULL constraint, the BOOLEAN data type also supports the truth value UNKNOWN as the null value. The specification does not make a distinction between the NULL value of this data type and the truth value UNKNOWN that is the result of an SQL predicate, search condition, or Boolean value expression: they may be used interchangeably to mean exactly the same thing.

As with many programming languages, the SQL BOOLEAN values can be tested with implicit truth values. For example, field1 OR field2 and NOT field1 are valid expressions.

The IS Operator

Predicates can use the Boolean IS [NOT] operator for matching. For example, field1 IS FALSE, or field1 IS NOT TRUE.

  • Equivalence operators (“=”, “!=”, “<>” and so on) are valid in all comparisons.

BOOLEAN Examples
  1. Inserting and selecting

    CREATE TABLE TBOOL (ID INT, BVAL BOOLEAN);
    COMMIT;
    
    INSERT INTO TBOOL VALUES (1, TRUE);
    INSERT INTO TBOOL VALUES (2, 2 = 4);
    INSERT INTO TBOOL VALUES (3, NULL = 1);
    COMMIT;
    
    SELECT * FROM TBOOL;
              ID    BVAL
    ============ =======
               1 <true>
               2 <false>
               3 <null>
  2. Test for TRUE value

    SELECT * FROM TBOOL WHERE BVAL;
              ID    BVAL
    ============ =======
               1 <true>
  3. Test for FALSE value

    SELECT * FROM TBOOL WHERE BVAL IS FALSE;
              ID    BVAL
    ============ =======
               2 <false>
  4. Test for UNKNOWN value

    SELECT * FROM TBOOL WHERE BVAL IS UNKNOWN;
              ID    BVAL
    ============ =======
               3 <null>
  5. Boolean values in SELECT list

    SELECT ID, BVAL, BVAL AND ID < 2
      FROM TBOOL;
              ID    BVAL
    ============ ======= =======
               1 <true>  <true>
               2 <false> <false>
               3 <null>  <false>
  6. PSQL declaration with start value

    DECLARE VARIABLE VAR1 BOOLEAN = TRUE;
  7. Valid syntax, but as with a comparison with NULL, will never return any record

SELECT * FROM TBOOL WHERE BVAL = UNKNOWN;
SELECT * FROM TBOOL WHERE BVAL <> UNKNOWN;
Use of Boolean against other data types

Although BOOLEAN is not inherently convertible to any other datatype, from version 3.0.1 the strings 'true' and 'false' (case-insensitive) will be implicitly cast to BOOLEAN in value expressions, e.g.

if (true > 'false') then ...

Here, 'false' is converted to BOOLEAN. However, any attempt to use the Boolean operators AND, NOT, OR and IS with such strings will fail: NOT 'False', for example, is invalid.

A BOOLEAN can be explicitly converted to and from string with CAST. UNKNOWN is not available for any form of casting.
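
For example (a minimal sketch; the string result is shown in upper case):

SELECT
  CAST(TRUE AS VARCHAR(5)),   -- 'TRUE'
  CAST('false' AS BOOLEAN)    -- FALSE
FROM RDB$DATABASE;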

Other Notes
  • The type is represented in the API with the FB_BOOLEAN type and FB_TRUE and FB_FALSE constants.

  • The value TRUE is greater than the value FALSE.

3.7. Binary Data Types

BLOBs (Binary Large Objects) are complex structures used to store text and binary data of an undefined length, often very large.

Syntax
BLOB [SUB_TYPE <subtype>]
  [SEGMENT SIZE <segment size>]
  [CHARACTER SET <character set>]
  [COLLATE <collation name>]
Shortened syntax
BLOB (<segment size>)
BLOB (<segment size>, <subtype>)
BLOB (, <subtype>)

Formally, the COLLATE clause is not part of the data type declaration, and its position depends on the syntax of the statement.

Segment Size
Specifying the BLOB segment size is a throwback to times past, when applications for working with BLOB data were written in C (Embedded SQL) with the help of the gpre pre-compiler. Nowadays, it is effectively irrelevant. The segment size for BLOB data is determined by the client side and is usually larger than the data page size, in any case.

3.7.1. BLOB Subtypes

The optional SUB_TYPE parameter specifies the nature of data written to the column. Firebird provides two pre-defined subtypes for storing user data:

Subtype 0: BINARY

If a subtype is not specified, the specification is assumed to be for untyped data and the default SUB_TYPE 0 is applied. The alias for subtype zero is BINARY. This is the subtype to specify when the data are any form of binary file or stream: images, audio, word-processor files, PDFs and so on.

Subtype 1: TEXT

Subtype 1 has an alias, TEXT, which can be used in declarations and definitions. For instance, BLOB SUB_TYPE TEXT. It is a specialized subtype used to store plain text data that is too large to fit into a string type. A CHARACTER SET may be specified, if the field is to store text with a different encoding to that specified for the database. From Firebird 2.0, a COLLATE clause is also supported.

Specifying a CHARACTER SET without SUB_TYPE implies SUB_TYPE TEXT.
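
A short declaration sketch combining both subtypes (the table and column names are hypothetical):

CREATE TABLE DOCUMENTS (
  BODY BLOB SUB_TYPE TEXT CHARACTER SET UTF8,  -- plain text with an explicit encoding
  SCAN BLOB SUB_TYPE BINARY                    -- untyped binary data (subtype 0)
);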

Custom Subtypes

It is also possible to add custom data subtypes, for which the range of enumeration from -1 to -32,768 is reserved. Custom subtypes enumerated with positive numbers are not allowed, as the Firebird engine uses the numbers from 2 upward for some internal subtypes in metadata.

3.7.2. BLOB Specifics

Size

The maximum size of a BLOB field is limited to 4GB, regardless of whether the server is 32-bit or 64-bit. (The internal structures related to BLOBs maintain their own 4-byte counters.) For a page size of 4 KB (4096 bytes) the maximum size is lower — slightly less than 2GB.

Operations and Expressions

Text BLOBs of any length and any character set — including multi-byte — can be operands for practically any statement or internal function. The following operators are supported completely:

  =                      (assignment)
  =, <>, <, <=, >, >=    (comparison)
  ||                     (concatenation)
  BETWEEN, IS [NOT] DISTINCT FROM, IN, ANY | SOME, ALL

Partial support:

  • An error occurs with STARTING [WITH], LIKE and CONTAINING if the search argument is larger than or equal to 32 KB

  • Aggregation clauses (SELECT DISTINCT, ORDER BY, GROUP BY) work not on the contents of the field itself, but on the BLOB ID. Aside from that, there are some quirks: SELECT DISTINCT returns several NULL values by mistake if they are present, and GROUP BY concatenates the same strings if they are adjacent to each other, but does not do it if they are remote from each other

BLOB Storage
  • By default, a regular record is created for each BLOB and it is stored on a data page that is allocated for it. If the entire BLOB fits onto this page, it is called a level 0 BLOB. The number of this special record is stored in the table record and occupies 8 bytes.

  • If a BLOB does not fit onto one data page, its contents are put onto separate pages allocated exclusively to it (blob pages), while the numbers of these pages are stored into the BLOB record. This is a level 1 BLOB.

  • If the array of page numbers containing the BLOB data does not fit onto a data page, the array is put on separate blob pages, while the numbers of these pages are put into the BLOB record. This is a level 2 BLOB.

  • Levels higher than 2 are not supported.

3.7.3. ARRAY Type

The support of arrays in the Firebird DBMS is a departure from the traditional relational model. Supporting arrays in the DBMS could make it easier to solve some data-processing tasks involving large sets of similar data.

Arrays in Firebird are stored in a BLOB of a specialized type. Arrays can be one-dimensional or multi-dimensional, and of any data type except BLOB and ARRAY.

Example
CREATE TABLE SAMPLE_ARR (
  ID INTEGER NOT NULL PRIMARY KEY,
  ARR_INT INTEGER [4]
);

This example will create a table with a field of the array type consisting of four integers. The subscripts of this array are from 1 to 4.

Specifying Explicit Boundaries for Dimensions

By default, dimensions are 1-based — subscripts are numbered from 1. To specify explicit upper and lower bounds of the subscript values, use the following syntax:

'[' <lower>:<upper> ']'
Adding More Dimensions

A new dimension is added using a comma in the syntax. In this example we create a table with a two-dimensional array, with the lower bound of subscripts in both dimensions starting from zero:

CREATE TABLE SAMPLE_ARR2 (
  ID INTEGER NOT NULL PRIMARY KEY,
  ARR_INT INTEGER [0:3, 0:3]
);

The DBMS does not offer much in the way of language or tools for working with the contents of arrays. The database employee.fdb, found in the ../examples/empbuild directory of any Firebird distribution package, contains a sample stored procedure showing some simple work with arrays:

PSQL Source for SHOW_LANGS, a procedure involving an array
CREATE OR ALTER PROCEDURE SHOW_LANGS (
  CODE VARCHAR(5),
  GRADE SMALLINT,
  CTY VARCHAR(15))
RETURNS (LANGUAGES VARCHAR(15))
AS
  DECLARE VARIABLE I INTEGER;
BEGIN
  I = 1;
  WHILE (I <= 5) DO
  BEGIN
    SELECT LANGUAGE_REQ[:I]
    FROM JOB
    WHERE (JOB_CODE = :CODE)
      AND (JOB_GRADE = :GRADE)
      AND (JOB_COUNTRY = :CTY)
      AND (LANGUAGE_REQ IS NOT NULL)
    INTO :LANGUAGES;

    IF (LANGUAGES = '') THEN
    /* PRINTS 'NULL' INSTEAD OF BLANKS */
      LANGUAGES = 'NULL';
    I = I + 1;
    SUSPEND;
  END
END

If the features described are enough for your tasks, you might consider using arrays in your projects. Currently, no improvements are planned to enhance support for arrays in Firebird.

3.8. Special Data Types

“Special” data types …

3.8.1. SQL_NULL Data Type

The SQL_NULL type holds no data, but only a state: NULL or NOT NULL. It is not available as a data type for declaring table fields, PSQL variables or parameter descriptions. It was added to support the use of untyped parameters in expressions involving the IS NULL predicate.

An evaluation problem occurs when optional filters are used to write queries of the following type:

WHERE col1 = :param1 OR :param1 IS NULL

After processing, at the API level, the query will look like this:

WHERE col1 = ? OR ? IS NULL

This is a case where the developer writes an SQL query and considers :param1 as though it were a variable that can be referred to twice. However, at the API level, the query contains two separate and independent parameters. The server cannot determine the type of the second parameter since it comes in association with IS NULL.

The SQL_NULL data type solves this problem. Whenever the engine encounters an “? IS NULL” predicate in a query, it assigns the SQL_NULL type to the parameter, which indicates that the parameter is only about “nullness” and that its data type or value need not be addressed.

The following example demonstrates its use in practice. It assumes two named parameters — say, :size and :colour — which might, for example, get values from on-screen text fields or drop-down lists. Each named parameter corresponds with two positional parameters in the query.

SELECT
  SH.SIZE, SH.COLOUR, SH.PRICE
FROM SHIRTS SH
WHERE (SH.SIZE = ? OR ? IS NULL)
  AND (SH.COLOUR = ? OR ? IS NULL)

Explaining what happens here assumes the reader is familiar with the Firebird API and the passing of parameters in XSQLVAR structures — what happens under the surface will not be of interest to those who are not writing drivers or applications that communicate using the “naked” API.

The application passes the parameterized query to the server in the usual positional ?-form. Pairs of “identical” parameters cannot be merged into one so, for two optional filters, for example, four positional parameters are needed: one for each ? in our example.

After the call to isc_dsql_describe_bind(), the SQLTYPE of the second and fourth parameters will be set to SQL_NULL. Firebird has no knowledge of their special relation with the first and third parameters: that responsibility lies entirely on the application side.

Once the values for size and colour have been set (or left unset) by the user and the query is about to be executed, each pair of XSQLVARs must be filled as follows:

User has supplied a value

First parameter (value compare): set *sqldata to the supplied value and *sqlind to 0 (for NOT NULL)

Second parameter (NULL test): set sqldata to null (null pointer, not SQL NULL) and *sqlind to 0 (for NOT NULL)

User has left the field blank

Both parameters: set sqldata to null (null pointer, not SQL NULL) and *sqlind to -1 (indicating NULL)

In other words: The value compare parameter is always set as usual. The SQL_NULL parameter is set the same, except that sqldata remains null at all times.

3.9. Conversion of Data Types

When composing an expression or specifying an operation, the aim should be to use compatible data types for the operands. When a need arises to use a mixture of data types, it should prompt you to look for a way to convert incompatible operands before subjecting them to the operation. The ability to convert data may well be an issue if you are working with Dialect 1 data.

3.9.1. Explicit Data Type Conversion

The CAST function enables explicit conversion between many pairs of data types.

Syntax
CAST (<expression> AS <target_type>)

<target_type> ::= <domain_or_non_array_type> | <array_datatype>

<domain_or_non_array_type> ::=
  !! See Scalar Data Types Syntax !!

<array_datatype> ::=
  !! See Array Data Types Syntax !!

See also CAST() in Chapter Built-in Scalar Functions.

Casting to a Domain

When you cast to a domain, any constraints declared for it are taken into account, i.e., NOT NULL or CHECK constraints. If the value does not pass the check, the cast will fail.

If TYPE OF is additionally specified — casting to its base type — any domain constraints are ignored during the cast. If TYPE OF is used with a character type (CHAR/VARCHAR), the character set and collation are retained.
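
A quick sketch of the difference, using a hypothetical domain D_POSITIVE:

CREATE DOMAIN D_POSITIVE AS INTEGER CHECK (VALUE > 0);

SELECT CAST(-1 AS D_POSITIVE) FROM RDB$DATABASE;          -- fails: the CHECK constraint is enforced
SELECT CAST(-1 AS TYPE OF D_POSITIVE) FROM RDB$DATABASE;  -- succeeds: domain constraints are ignored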

Casting to TYPE OF COLUMN

When operands are cast to the type of a column, the specified column may be from a table or a view.

Only the type of the column itself is used. For character types, the cast includes the character set, but not the collation. The constraints and default values of the source column are not applied.

Example
CREATE TABLE TTT (
  S VARCHAR (40)
  CHARACTER SET UTF8 COLLATE UNICODE_CI_AI
);
COMMIT;

SELECT
  CAST ('I have many friends' AS TYPE OF COLUMN TTT.S)
FROM RDB$DATABASE;
Conversions Possible for the CAST Function
Table 8. Conversions with CAST

  From Data Type  To Data Type
  Numeric types   Numeric types, [VAR]CHAR, BLOB
  [VAR]CHAR       [VAR]CHAR, BLOB, Numeric types, DATE, TIME, TIMESTAMP, BOOLEAN
  BLOB            [VAR]CHAR, BLOB, Numeric types, DATE, TIME, TIMESTAMP, BOOLEAN
  DATE, TIME      [VAR]CHAR, BLOB, TIMESTAMP
  TIMESTAMP       [VAR]CHAR, BLOB, DATE, TIME
  BOOLEAN         BOOLEAN, [VAR]CHAR, BLOB

To convert string data types to the BOOLEAN type, the value must be (case-insensitive) 'true' or 'false', or NULL.

Keep in mind that partial information loss is possible. For instance, when you cast the TIMESTAMP data type to the DATE data type, the time-part is lost.
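
For instance (a quick sketch):

SELECT CAST(TIMESTAMP '2018-01-19 13:32:02' AS DATE)
FROM RDB$DATABASE;
-- returns the date 2018-01-19; the time-part 13:32:02 is lost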

Literal Formats

To cast string data types to the DATE, TIME or TIMESTAMP data types, the string argument must be one of the predefined date and time literals (see Table 10) or a representation of the date in one of the allowed date-time literal formats:

<timestamp_format> ::=
    { [YYYY<p>]MM<p>DD[<p>HH[<p>mm[<p>SS[<p>NNNN]]]]
    | MM<p>DD[<p>YYYY[<p>HH[<p>mm[<p>SS[<p>NNNN]]]]]
    | DD<p>MM[<p>YYYY[<p>HH[<p>mm[<p>SS[<p>NNNN]]]]]
    | MM<p>DD[<p>YY[<p>HH[<p>mm[<p>SS[<p>NNNN]]]]]
    | DD<p>MM[<p>YY[<p>HH[<p>mm[<p>SS[<p>NNNN]]]]]
    | NOW
    | TODAY
    | TOMORROW
    | YESTERDAY }

<date_format> ::=
    { [YYYY<p>]MM<p>DD
    | MM<p>DD[<p>YYYY]
    | DD<p>MM[<p>YYYY]
    | MM<p>DD[<p>YY]
    | DD<p>MM[<p>YY]
    | TODAY
    | TOMORROW
    | YESTERDAY }

<time_format> ::=
    { HH[<p>mm[<p>SS[<p>NNNN]]]
    | NOW }

<p> ::= whitespace | . | : | , | - | /
Table 9. Date and Time Literal Format Arguments
Argument Description

timestamp_format

Format of timestamp literal

date_format

Format of date literal

time_format

Format of time literal

YYYY

Four-digit year

YY

Two-digit year

MM

Month. It may contain 1 or 2 digits (1-12 or 01-12). You can also specify the three-letter shorthand name or the full name of a month in English. Case-insensitive

DD

Day. It may contain 1 or 2 digits (1-31 or 01-31)

HH

Hour. It may contain 1 or 2 digits (0-23 or 00-23)

mm

Minutes. It may contain 1 or 2 digits (0-59 or 00-59)

SS

Seconds. It may contain 1 or 2 digits (0-59 or 00-59)

NNNN

Ten-thousandths of a second. It may contain from 1 to 4 digits (0-9999)

p

A separator, any of the permitted characters. Leading and trailing spaces are ignored

Table 10. Literals with Predefined Values of Date and Time

Literal      Description             Data Type (Dialect 1)  Data Type (Dialect 3)
'NOW'        Current date and time   DATE                   TIMESTAMP
'TODAY'      Current date            DATE with zero time    DATE
'TOMORROW'   Current date + 1 (day)  DATE with zero time    DATE
'YESTERDAY'  Current date - 1 (day)  DATE with zero time    DATE

Use of the complete specification of the year in the four-digit form — YYYY — is strongly recommended, to avoid confusion in date calculations and aggregations.

Sample Date Literal Interpretations
select
  cast('04.12.2014' as date) as d1, -- DD.MM.YYYY
  cast('04 12 2014' as date) as d2, -- MM DD YYYY
  cast('4-12-2014' as date) as d3,  -- MM-DD-YYYY
  cast('04/12/2014' as date) as d4, -- MM/DD/YYYY
  cast('04,12,2014' as date) as d5, -- MM,DD,YYYY
  cast('04.12.14' as date) as d6,   -- DD.MM.YY
  -- DD.MM with current year
  cast('04.12' as date) as d7,
  -- MM/DD with current year
  cast('04/12' as date) as d8,
  cast('2014/12/04' as date) as d9, -- YYYY/MM/DD
  cast('2014 12 04' as date) as d10, -- YYYY MM DD
  cast('2014.12.04' as date) as d11, -- YYYY.MM.DD
  cast('2014-12-04' as date) as d12, -- YYYY-MM-DD
  cast('4 Jan 2014' as date) as d13, -- DD MM YYYY
  cast('2014 Jan 4' as date) as dt14, -- YYYY MM DD
  cast('Jan 4, 2014' as date) as dt15, -- MM DD, YYYY
  cast('11:37' as time) as t1, -- HH:mm
  cast('11:37:12' as time) as t2, -- HH:mm:ss
  cast('11:31:12.1234' as time) as t3, -- HH:mm:ss.nnnn
  cast('11.37.12' as time) as t4, -- HH.mm.ss
  -- DD.MM.YYYY HH:mm
  cast('04.12.2014 11:37' as timestamp) as dt1,
  -- MM/DD/YYYY HH:mm:ss
  cast('04/12/2014 11:37:12' as timestamp) as dt2,
  -- DD.MM.YYYY HH:mm:ss.nnnn
  cast('04.12.2014 11:31:12.1234' as timestamp) as dt3,
  -- MM/DD/YYYY HH.mm.ss
  cast('04/12/2014 11.37.12' as timestamp) as dt4
from rdb$database
Shorthand Casts for Date and Time Data Types

Firebird allows the use of a shorthand “C-style” type syntax for casts from string to the types DATE, TIME and TIMESTAMP. The SQL standard calls these datetime literals.

Syntax
<data_type> 'date_literal_string'
Example
-- 1
  UPDATE PEOPLE
  SET AGECAT = 'SENIOR'
  WHERE BIRTHDATE < DATE '1-Jan-1943';
-- 2
  INSERT INTO APPOINTMENTS
  (EMPLOYEE_ID, CLIENT_ID, APP_DATE, APP_TIME)
  VALUES (973, 8804, DATE 'today' + 2, TIME '16:00');
-- 3
  NEW.LASTMOD = TIMESTAMP 'now';

These shorthand expressions are evaluated directly during parsing, as though the statement were already prepared for execution. Thus, even if the query is run several times, the value of, for instance, timestamp 'now' remains the same no matter how much time passes.

If you need the time to be evaluated at each execution, use the full CAST syntax. An example of using such an expression in a trigger:

NEW.CHANGE_DATE = CAST('now' AS TIMESTAMP);

Firebird 4 no longer allows implicit datetime values such as 'now', 'today', etc. in these shorthand casts. It is advisable to switch to the full CAST expression for such values.

3.9.2. Implicit Data Type Conversion

Implicit data conversion is not possible in Dialect 3 — the CAST function is almost always required to avoid data type clashes.

In Dialect 1, in many expressions, one type is implicitly cast to another without the need to use the CAST function. For instance, the following statement in Dialect 1 is valid:

UPDATE ATABLE
  SET ADATE = '25.12.2016' + 1

and the date literal will be cast to the date type implicitly.

In Dialect 3, this statement will throw error 335544569, “Dynamic SQL Error: expression evaluation not supported, Strings cannot be added or subtracted in dialect 3” — a cast will be needed:

UPDATE ATABLE
  SET ADATE = CAST ('25.12.2016' AS DATE) + 1

or, with the short cast:

UPDATE ATABLE
  SET ADATE = DATE '25.12.2016' + 1

In Dialect 1, mixing integer data and numeric strings is usually possible because the parser will try to cast the string implicitly. For example,

2 + '1'

will be executed correctly.

In Dialect 3, an expression like this will raise an error, so you will need to write it as a CAST expression:

2 + CAST('1' AS SMALLINT)

The exception to the rule is during string concatenation.

Implicit Conversion During String Concatenation

When multiple data elements are being concatenated, all non-string data will undergo implicit conversion to string, if possible.

Example
SELECT 30||' days hath September, April, June and November' CONCAT$
  FROM RDB$DATABASE;

CONCAT$
------------------------------------------------
30 days hath September, April, June and November

3.10. Custom Data Types — Domains

In Firebird, the concept of a “user-defined data type” is implemented in the form of the domain. Creating a domain does not truly create a new data type, of course. A domain provides the means to encapsulate an existing data type with a set of attributes and make this “capsule” available for multiple usage across the whole database. If several tables need columns defined with identical or nearly identical attributes, a domain makes sense.

Domain usage is not limited to column definitions for tables and views. Domains can be used to declare input and output parameters and variables in PSQL code.

3.10.1. Domain Attributes

A domain definition contains required and optional attributes. The data type is a required attribute. Optional attributes include:

  • a default value

  • to allow or forbid NULL

  • CHECK constraints

  • character set (for character data types and text BLOB fields)

  • collation (for character data types)

Sample domain definition
CREATE DOMAIN BOOL3 AS SMALLINT
  CHECK (VALUE IS NULL OR VALUE IN (0, 1));
See also

Explicit Data Type Conversion for the description of differences in the data conversion mechanism when domains are specified for the TYPE OF and TYPE OF COLUMN modifiers.

3.10.2. Domain Override

While defining a column using a domain, it is possible to override some of the attributes inherited from the domain. Table 11 summarises the rules for domain override.

Table 11. Rules for Overriding Domain Attributes in Column Definition

Attribute                Override?  Comments
Data type                No
Default value            Yes
Text character set       Yes        Can also be used to restore the database default
                                    values for the column
Text collation sequence  Yes
CHECK constraints        Yes        To add new conditions to the check, use the
                                    corresponding CHECK clauses in the CREATE and
                                    ALTER statements at the table level
NOT NULL                 No         Often it is better to leave the domain nullable
                                    in its definition and decide whether to make it
                                    NOT NULL when using the domain to define columns
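
As a sketch of these rules, a column based on the BOOL3 domain defined earlier in this chapter can supply its own default and add NOT NULL, but cannot change the data type (the table and column names are hypothetical):

CREATE TABLE SURVEY (
  Q1 BOOL3 DEFAULT 0 NOT NULL, -- adds a default and the NOT NULL constraint
  Q2 BOOL3                     -- inherits the domain attributes unchanged
);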

3.10.3. Creating and Administering Domains

A domain is created with the DDL statement CREATE DOMAIN.

Short Syntax
CREATE DOMAIN name [AS] <type>
  [DEFAULT {<const> | <literal> | NULL | <context_var>}]
  [NOT NULL] [CHECK (<condition>)]
  [COLLATE <collation>]
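
An illustrative definition exercising the optional clauses (the domain name D_LOGIN and its settings are assumptions for the sketch):

CREATE DOMAIN D_LOGIN AS VARCHAR(32) CHARACTER SET UTF8
  DEFAULT 'guest'
  NOT NULL
  CHECK (VALUE <> '')
  COLLATE UNICODE_CI;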
See also

CREATE DOMAIN in the Data Definition Language (DDL) section.

Altering a Domain

To change the attributes of a domain, use the DDL statement ALTER DOMAIN. With this statement you can:

  • rename the domain

  • change the data type

  • drop the current default value

  • set a new default value

  • drop the NOT NULL constraint

  • set the NOT NULL constraint

  • drop an existing CHECK constraint

  • add a new CHECK constraint

Short Syntax
ALTER DOMAIN name
  [{TO new_name}]
  [{SET DEFAULT { <literal> | NULL | <context_var> } |
    DROP DEFAULT}]
  [{SET | DROP} NOT NULL ]
  [{ADD [CONSTRAINT] CHECK (<dom_condition>) |
    DROP CONSTRAINT}]
  [{TYPE <datatype>}]
Example
ALTER DOMAIN STORE_GRP SET DEFAULT -1;
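
Further sketches of the capabilities listed above (the domain names are hypothetical):

ALTER DOMAIN D_LOGIN TO D_USERNAME;       -- rename the domain
ALTER DOMAIN D_USERNAME SET NOT NULL;     -- add the NOT NULL constraint
ALTER DOMAIN D_USERNAME DROP CONSTRAINT;  -- drop the CHECK constraint
ALTER DOMAIN D_USERNAME TYPE VARCHAR(64); -- change the data type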

When changing a domain, its dependencies must be taken into account: whether there are table columns, any variables, input and/or output parameters with the type of this domain declared in the PSQL code. If you change domains in haste, without carefully checking them, your code may stop working!

When you convert the data type of a domain, you must not perform any conversion that may result in data loss. If, for example, you convert VARCHAR to INTEGER, check carefully that all data using this domain can be successfully converted.

See also

ALTER DOMAIN in the Data Definition Language (DDL) section.

Deleting (Dropping) a Domain

The DDL statement DROP DOMAIN deletes a domain from the database, provided it is not in use by any other database objects.

Syntax
DROP DOMAIN name

Any user connected to the database can delete a domain.

Example
DROP DOMAIN Test_Domain
See also

DROP DOMAIN in the Data Definition Language (DDL) section.

3.11. Data Type Declaration Syntax

This section documents the syntax of declaring data types. Data type declaration most commonly occurs in DDL statements, but also in CAST and EXECUTE BLOCK.

The syntax documented below is referenced from other parts of this language reference.

3.11.1. Scalar Data Types Syntax

The scalar data types are simple data types that hold a single value. For reasons of organisation, the syntax of the BLOB types is defined separately in BLOB Data Types Syntax.

Scalar Data Types Syntax
<domain_or_non_array_type> ::=
    <scalar_datatype>
  | <blob_datatype>
  | [TYPE OF] domain
  | TYPE OF COLUMN rel.col

<scalar_datatype> ::=
    SMALLINT | INT[EGER] | BIGINT
  | FLOAT | DOUBLE PRECISION
  | BOOLEAN
  | DATE | TIME | TIMESTAMP
  | {DECIMAL | NUMERIC} [(precision [, scale])]
  | {VARCHAR | {CHAR | CHARACTER} VARYING} (length)
    [CHARACTER SET charset]
  | {CHAR | CHARACTER} [(length)] [CHARACTER SET charset]
  | {NCHAR | NATIONAL {CHARACTER | CHAR}} VARYING (length)
  | {NCHAR | NATIONAL {CHARACTER | CHAR}} [(length)]
Table 12. Arguments for the Scalar Data Types Syntax
Argument Description

domain

Domain (only non-array domains)

rel

Name of a table or view

col

Name of a column in a table or view (only columns of a non-array type)

precision

Numeric precision in decimal digits. From 1 to 18

scale

Scale, or number of decimals. From 0 to 18. It must be less than or equal to precision

length

The maximum length of a string, in characters

charset

Character set

domain_or_non_array_type

Non-array types that can be used in PSQL code and casts

Use of Domains in Declarations

A domain name can be specified as the type of a PSQL parameter or local variable. The parameter or variable will inherit all domain attributes. If a default value is specified for the parameter or variable, it overrides the default value specified in the domain definition.

If the TYPE OF clause is added before the domain name, only the data type of the domain is used: any of the other attributes of the domain — NOT NULL constraint, CHECK constraints, default value — are neither checked nor used. However, if the domain is of a text type, its character set and collation sequence are always used.
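
A minimal PSQL sketch using the BOOL3 domain defined in section 3.10: because of TYPE OF, the domain’s CHECK is not applied, so an out-of-range value is accepted:

EXECUTE BLOCK RETURNS (R SMALLINT)
AS
  DECLARE V TYPE OF BOOL3 = 5; -- only the base type SMALLINT is used; the CHECK is ignored
BEGIN
  R = V;
  SUSPEND;
END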

Use of Column Type in Declarations

Input and output parameters or local variables can also be declared using the data type of columns in existing tables and views. The TYPE OF COLUMN clause is used for that, specifying relationname.columnname as its argument.

When TYPE OF COLUMN is used, the parameter or variable inherits only the data type and — for string types — the character set and collation sequence. The constraints and default value of the column are ignored.
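
A similar sketch with TYPE OF COLUMN, based on the TTT table created earlier in this chapter:

EXECUTE BLOCK RETURNS (TXT TYPE OF COLUMN TTT.S)
AS
BEGIN
  TXT = 'I have many friends'; -- inherits VARCHAR(40) CHARACTER SET UTF8 and its collation
  SUSPEND;
END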

3.11.2. BLOB Data Types Syntax

The BLOB data types hold binary, character or custom format data of unspecified size. For more information, see Binary Data Types.

BLOB Data Types Syntax
<blob_datatype> ::=
    BLOB [SUB_TYPE {subtype_num | subtype_name}]
    [SEGMENT SIZE seglen] [CHARACTER SET charset]
  | BLOB [(seglen [, subtype_num])]
Table 13. Arguments for the Blob Data Types Syntax
Argument Description

charset

Character set (ignored for sub-types other than TEXT/1)

subtype_num

BLOB subtype number

subtype_name

BLOB subtype mnemonic name; this can be TEXT, BINARY, or one of the (other) standard or custom names defined in RDB$TYPES for RDB$FIELD_NAME = 'RDB$FIELD_SUB_TYPE'.

seglen

Segment size, cannot be greater than 65,535, defaults to 80 when not specified. See also Segment Size
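
A sketch of typical BLOB column declarations (the table and column names are hypothetical):

CREATE TABLE NOTES (
  ID    INTEGER,
  MEMO  BLOB SUB_TYPE TEXT CHARACTER SET UTF8, -- textual BLOB (sub-type 1)
  PHOTO BLOB SUB_TYPE BINARY                   -- binary BLOB (sub-type 0)
);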

3.11.3. Array Data Types Syntax

The array data types hold multiple scalar values in a single or multi-dimensional array. For more information, see ARRAY Type.

Array Data Types Syntax
<array_datatype> ::=
    {SMALLINT | INT[EGER] | BIGINT} <array_dim>
  | {FLOAT | DOUBLE PRECISION} <array_dim>
  | BOOLEAN <array_dim>
  | {DATE | TIME | TIMESTAMP} <array_dim>
  | {DECIMAL | NUMERIC} [(precision [, scale])] <array_dim>
  | {VARCHAR | {CHAR | CHARACTER} VARYING} (length)
    <array_dim> [CHARACTER SET charset]
  | {CHAR | CHARACTER} [(length)] <array_dim>
    [CHARACTER SET charset]
  | {NCHAR | NATIONAL {CHARACTER | CHAR}} VARYING (length)
    <array_dim>
  | {NCHAR | NATIONAL {CHARACTER | CHAR}}
    [(length)] <array_dim>

<array_dim> ::= '[' [m:]n [,[m:]n ...] ']'
Table 14. Arguments for the Array Data Types Syntax
Argument Description

array_dim

Array dimensions

precision

Numeric precision in decimal digits. From 1 to 18

scale

Scale, or number of decimals. From 0 to 18. It must be less than or equal to precision

length

The maximum length of a string, in characters; optional for fixed-width character types, defaults to 1

charset

Character set

m, n

Integer numbers defining the index range of an array dimension
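
A sketch of array column declarations (the names are hypothetical):

CREATE TABLE SAMPLE_ARRAYS (
  ID        INTEGER,
  QUARTERLY INTEGER[4],                -- one dimension, indexes 1..4 (lower bound defaults to 1)
  GRID      DOUBLE PRECISION[0:3, 0:3] -- two dimensions with explicit bounds
);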

4. Common Language Elements

This chapter covers the elements that are common throughout the implementation of the SQL language — the expressions that are used to extract and operate on assertions about data, and the predicates that test the truth of those assertions.

4.1. Expressions

SQL expressions provide formal methods for evaluating, transforming and comparing values. SQL expressions may include table columns, variables, constants, literals, various statements and predicates and also other expressions. The complete list of possible tokens in expressions follows.

Description of Expression Elements
Column name

Identifier of a column from a specified table used in evaluations or as a search condition. A column of the array type cannot be an element in an expression except when used with the IS [NOT] NULL predicate.

Array element

An expression may contain a reference to an array member i.e., <array_name>[s], where s is the subscript of the member in the array <array_name>

Arithmetic operators

The +, -, *, / characters used to calculate values

Concatenation operator

The || (“double-pipe”) operator used to concatenate strings

Logical operators

The reserved words NOT, AND and OR, used to combine simple search conditions in order to create complex conditions

Comparison operators

The symbols =, <>, !=, ~=, ^=, <, <=, >, >=, !<, ~<, ^<, !>, ~> and ^>

Comparison predicates

LIKE, STARTING WITH, CONTAINING, SIMILAR TO, BETWEEN, IS [NOT] NULL, IS [NOT] {TRUE | FALSE | UNKNOWN} and IS [NOT] DISTINCT FROM

Existential predicates

Predicates used to check the existence of values in a set. The IN predicate can be used both with sets of comma-separated constants and with subqueries that return a single column. The EXISTS, SINGULAR, ALL, ANY and SOME predicates can be used only with subqueries.

Constant or Literal

Numbers, string literals enclosed in apostrophes, the Boolean values TRUE, FALSE and UNKNOWN, or NULL

Date/time literal

An expression, similar to a string literal enclosed in apostrophes, that can be interpreted as a date, time or timestamp value. Date literals can be predefined literals ('TODAY', 'NOW', etc.) or strings of characters and numerals, such as '25.12.2016 15:30:35', that can be resolved as date and/or time strings.

Context variable

An internally-defined context variable

Local variable

Declared local variable, input or output parameter of a PSQL module (stored procedure, trigger, unnamed PSQL block in DSQL)

Positional parameter

A member of an ordered group of one or more unnamed parameters passed to a stored procedure or prepared query

Subquery

A SELECT statement enclosed in parentheses that returns a single (scalar) value or, when used in existential predicates, a set of values

Function identifier

The identifier of an internal or external function in a function expression

Type cast

An expression explicitly converting data of one data type to another using the CAST function ( CAST (<value> AS <datatype>) ). For date/time literals only, the shorthand syntax <datatype> <value> is also supported (DATE '2016-12-25').

Conditional expression

Expressions using CASE and related internal functions

Parentheses

Bracket pairs (…​) used to group expressions. Operations inside the parentheses are performed before operations outside them. When nested parentheses are used, the most deeply nested expressions are evaluated first and then the evaluations move outward through the levels of nesting.

COLLATE clause

Clause applied to CHAR and VARCHAR types to specify the character-set-specific collation sequence to use in string comparisons

NEXT VALUE FOR sequence

Expression for obtaining the next value of a specified generator (sequence). The internal GEN_ID() function does the same.

4.1.1. Literals (Constants)

A literal — or constant — is a value that is supplied directly in an SQL statement, not derived from an expression, a parameter, a column reference nor a variable. It can be a string or a number.

String Literals

A string literal is a series of characters enclosed between a pair of apostrophes (“single quotes”). The maximum length of a string literal is 32,765 bytes for CHAR/VARCHAR, or 65,533 bytes for BLOB; the maximum character count will be determined by the number of bytes used to encode each character.

  • Double quotes are NOT VALID for quoting strings. The SQL standard reserves double quotes for a different purpose: quoting identifiers.

  • If a literal apostrophe is required within a string constant, it is “escaped” by prefixing it with another apostrophe. For example, 'Mother O''Reilly’s home-made hooch'.

  • Care should be taken with the string length if the value is to be written to a CHAR or VARCHAR column. The maximum length for a CHAR or VARCHAR literal is 32,765 bytes.

The character set of a string constant is assumed to be the same as the character set of its destined storage.

String Literals in Hexadecimal Notation

From Firebird 2.5 forward, string literals can be entered in hexadecimal notation, so-called “binary strings”. Each pair of hex digits defines one byte in the string. Strings entered this way will have character set OCTETS by default, but the introducer syntax can be used to force a string to be interpreted as another character set.

Syntax
{x|X}'<hexstring>'

<hexstring>  ::=  an even number of <hexdigit>
<hexdigit>   ::=  one of 0..9, A..F, a..f
Examples
select x'4E657276656E' from rdb$database
-- returns 4E657276656E, a 6-byte 'binary' string

select _ascii x'4E657276656E' from rdb$database
-- returns 'Nerven' (same string, now interpreted as ASCII text)

select _iso8859_1 x'53E46765' from rdb$database
-- returns 'Säge' (4 chars, 4 bytes)

select _utf8 x'53C3A46765' from rdb$database
-- returns 'Säge' (4 chars, 5 bytes)
Notes

The client interface determines how binary strings are displayed to the user. The isql utility, for example, uses upper case letters A-F, while FlameRobin uses lower case letters. Other client programs may use other conventions, such as displaying spaces between the byte pairs: '4E 65 72 76 65 6E'.

The hexadecimal notation allows any byte value (including 00) to be inserted at any position in the string. However, if you want to coerce it to anything other than OCTETS, it is your responsibility to supply the bytes in a sequence that is valid for the target character set.

Alternative String Literals

Since Firebird 3.0, it is possible to use a character, or character pair, other than the doubled (escaped) apostrophe, to embed a quoted string inside another string. The keyword q or Q preceding a quoted string informs the parser that certain left-right pairs or pairs of identical characters within the string are the delimiters of the embedded string literal.

Syntax
<alternative string literal> ::=
    { q | Q } <quote> <start char> [<char> ...] <end char> <quote>
Rules

When <start char> is ‘(’, ‘{’, ‘[’ or ‘<’, <end char> is paired up with its respective “partner”, viz. ‘)’, ‘}’, ‘]’ and ‘>’. In other cases, <end char> is the same as <start char>.

Inside the string, i.e. <char> items, single (not escaped) quotes can be used. Each quote will be part of the result string.

Examples
select q'{abc{def}ghi}' from rdb$database;        -- result: abc{def}ghi
select q'!That's a string!' from rdb$database;    -- result: That's a string
Introducer Syntax for String Literals

If necessary, a string literal may be preceded by a character set name, itself prefixed with an underscore “_”. This is known as introducer syntax. Its purpose is to inform the engine about how to interpret and store the incoming string.

Example

INSERT INTO People
VALUES (_ISO8859_1 'Hans-Jörg Schäfer')
Number Literals

A number literal is any valid number in a supported notation:

  • In SQL, for numbers in the standard decimal notation, the decimal point is always represented by the period character (‘.’, full-stop, dot); thousands are not separated. Inclusion of commas, blanks, etc. will cause errors.

  • Exponential notation is supported. For example, 0.0000234 can be expressed as 2.34e-5.

  • Hexadecimal notation is supported by Firebird 2.5 and higher versions — see below.

The format of the literal decides the type (<d> for a decimal digit, <h> for a hexadecimal digit):

Format                                Type
<d>[<d> …]                            INTEGER or BIGINT (depending on whether the value fits the type)
0{x|X}<h>[<h> …]                      INTEGER for 1-8 hex digits, BIGINT for 9-16 hex digits
<d>[<d> …] "." [<d> …]                NUMERIC(18, n), where n depends on the number of digits after the decimal point
<d>[<d> …]["." [<d> …]] E <d>[<d> …]  DOUBLE PRECISION

Hexadecimal Notation for Numbers

From Firebird 2.5 forward, integer values can be entered in hexadecimal notation. Numbers with 1-8 hex digits will be interpreted as type INTEGER; numbers with 9-16 hex digits as type BIGINT.

Syntax
0{x|X}<hexdigits>

<hexdigits>  ::=  1-16 of <hexdigit>
<hexdigit>   ::=  one of 0..9, A..F, a..f
Examples
select 0x6FAA0D3 from rdb$database           -- returns 117088467
select 0x4F9 from rdb$database               -- returns 1273
select 0x6E44F9A8 from rdb$database          -- returns 1850014120
select 0x9E44F9A8 from rdb$database          -- returns -1639646808 (an INTEGER)
select 0x09E44F9A8 from rdb$database         -- returns 2655320488 (a BIGINT)
select 0x28ED678A4C987 from rdb$database     -- returns 720001751632263
select 0xFFFFFFFFFFFFFFFF from rdb$database  -- returns -1
Hexadecimal Value Ranges
  • Hex numbers in the range 0 .. 7FFF FFFF are positive INTEGERs with values between 0 .. 2147483647 decimal. To coerce a number to BIGINT, prepend enough zeroes to bring the total number of hex digits to nine or above. That changes the type but not the value.

  • Hex numbers between 8000 0000 .. FFFF FFFF require some attention:

    • When written with eight hex digits, as in 0x9E44F9A8, a value is interpreted as 32-bit INTEGER. Since the leftmost bit (sign bit) is set, it maps to the negative range -2147483648 .. -1 decimal.

    • With one or more zeroes prepended, as in 0x09E44F9A8, a value is interpreted as 64-bit BIGINT in the range 0000 0000 8000 0000 .. 0000 0000 FFFF FFFF. The sign bit is not set now, so they map to the positive range 2147483648 .. 4294967295 decimal.

    Thus, in this range — and only in this range — prepending a mathematically insignificant 0 results in a totally different value. This is something to be aware of.

  • Hex numbers between 1 0000 0000 .. 7FFF FFFF FFFF FFFF are all positive BIGINT.

  • Hex numbers between 8000 0000 0000 0000 .. FFFF FFFF FFFF FFFF are all negative BIGINT.

  • A SMALLINT cannot be written in hex, strictly speaking, since even 0x1 is evaluated as INTEGER. However, if you write a positive integer within the 16-bit range 0x0000 (decimal zero) to 0x7FFF (decimal 32767) it will be converted to SMALLINT transparently.

    It is possible to write a negative SMALLINT in hex, using a 4-byte hex number within the range 0xFFFF8000 (decimal -32768) to 0xFFFFFFFF (decimal -1).

Boolean Literals

A Boolean literal is one of TRUE, FALSE or UNKNOWN.

4.1.2. SQL Operators

SQL operators comprise operators for comparing, calculating, evaluating and concatenating values.

Operator Precedence

SQL operators are divided into four types. Each operator type has a precedence, a ranking that determines the order in which operators and the values obtained with their help are evaluated in an expression. The higher the precedence of the operator type, the earlier it is evaluated. Each operator also has its own precedence within its type, which determines the order in which operators of that type are evaluated in an expression.

Operators with the same precedence are evaluated from left to right. To force a different evaluation order, operations can be grouped by means of parentheses.

Table 15. Operator Type Precedence

Operator Type  Precedence  Explanation
Concatenation  1           Strings are concatenated before any other operations
                           take place
Arithmetic     2           Arithmetic operations are performed after strings are
                           concatenated, but before comparison and logical
                           operations
Comparison     3           Comparison operations take place after string
                           concatenation and arithmetic operations, but before
                           logical operations
Logical        4           Logical operators are executed after all other types
                           of operators

Concatenation Operator

The concatenation operator, two pipe characters known as “double pipe” — ‘||’ — concatenates (connects together) two character strings to form a single string. Character strings can be constants or values obtained from columns or other expressions.

Example
SELECT LAST_NAME || ', ' || FIRST_NAME AS FULL_NAME
FROM EMPLOYEE
Arithmetic Operators
Table 16. Arithmetic Operator Precedence

Operator        Purpose         Precedence
+signed_number  Unary plus      1
-signed_number  Unary minus     1
*               Multiplication  2
/               Division        2
+               Addition        3
-               Subtraction     3

Example
UPDATE T
    SET A = 4 + 1/(B-C)*D

Where operators have the same precedence, they are evaluated in left-to-right sequence.

Comparison Operators
Table 17. Comparison Operator Precedence

Operator        Purpose                                               Precedence
IS              Checks that the expression on the left is (not)
                NULL or the Boolean value on the right                1
=               Is equal to, is identical to                          2
<>, !=, ~=, ^=  Is not equal to                                       2
>               Is greater than                                       2
<               Is less than                                          2
>=              Is greater than or equal to                          2
<=              Is less than or equal to                             2
!>, ~>, ^>      Is not greater than                                   2
!<, ~<, ^<      Is not less than                                      2

This group also includes comparison predicates BETWEEN, LIKE, CONTAINING, SIMILAR TO and others.

Example
IF (SALARY > 1400) THEN
…
Logical Operators
Table 18. Logical Operator Precedence

Operator  Purpose                                                   Precedence
NOT       Negation of a search condition                            1
AND       Combines two or more predicates, each of which must be
          true for the entire predicate to be true                  2
OR        Combines two or more predicates, of which at least one
          predicate must be true for the entire predicate to be
          true                                                      3

Example
IF (A < B OR (A > C AND A > D) AND NOT (C = D)) THEN …
NEXT VALUE FOR
Available in

DSQL, PSQL

Syntax
NEXT VALUE FOR sequence-name

NEXT VALUE FOR returns the next value of a sequence. SEQUENCE is the SQL-standard term for what is historically called a generator in Firebird and its ancestor, InterBase. The NEXT VALUE FOR operator is equivalent to the legacy GEN_ID (…​, 1) function, and is the recommended syntax for retrieving the next sequence value.

Unlike GEN_ID (…​, 1), the NEXT VALUE FOR variant does not take any parameters and thus provides no way to retrieve the current value of a sequence, nor to step the next value by more than 1. GEN_ID (…​, <step value>) is still needed for these tasks. A step value of 0 returns the current sequence value.

Example
NEW.CUST_ID = NEXT VALUE FOR CUSTSEQ;
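
As a sketch of the step-0 behaviour mentioned above, the current value can be read without incrementing the sequence:

SELECT GEN_ID(CUSTSEQ, 0) FROM RDB$DATABASE; -- current value of CUSTSEQ, no increment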

4.1.3. Conditional Expressions

A conditional expression is one that returns different values according to how a certain condition is met. It is composed by applying a conditional function construct, of which Firebird supports several. This section describes only one conditional expression construct: CASE. All other conditional expressions apply internal functions derived from CASE and are described in Conditional Functions.

CASE
Available in

DSQL, PSQL

The CASE construct returns a single value from a number of possible values. Two syntactic variants are supported:

  • The simple CASE, comparable to a case construct in Pascal or a switch in C

  • The searched CASE, which works like a series of “if …​ else if …​ else if” clauses.

Simple CASE
Syntax
…
CASE <test-expr>
  WHEN <expr> THEN <result>
  [WHEN <expr> THEN <result> ...]
  [ELSE <defaultresult>]
END
…

When this variant is used, test-expr is compared to the first expr, second expr and so on, until a match is found, and the corresponding result is returned. If no match is found, defaultresult from the optional ELSE clause is returned. If there are no matches and no ELSE clause, NULL is returned.

The matching works identically to the “=” operator. That is, if test-expr is NULL, it does not match any expr, not even an expression that resolves to NULL.

The returned result does not have to be a literal value: it might be a field or variable name, compound expression or NULL literal.

Example
SELECT
  NAME,
  AGE,
  CASE UPPER(SEX)
    WHEN 'M' THEN 'Male'
    WHEN 'F' THEN 'Female'
    ELSE 'Unknown'
  END GENDER,
  RELIGION
FROM PEOPLE

A short form of the simple CASE construct is the DECODE function.
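
For comparison, a sketch of the same lookup written with DECODE:

SELECT
  NAME,
  DECODE(UPPER(SEX), 'M', 'Male', 'F', 'Female', 'Unknown') GENDER
FROM PEOPLE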

Searched CASE
Syntax
CASE
  WHEN <bool_expr> THEN <result>
  [WHEN <bool_expr> THEN <result> …]
  [ELSE <defaultresult>]
END

The bool_expr expression is one that gives a ternary logical result: TRUE, FALSE or NULL. The first expression to return TRUE determines the result. If no expressions return TRUE, defaultresult from the optional ELSE clause is returned as the result. If no expressions return TRUE and there is no ELSE clause, the result will be NULL.

As with the simple CASE construct, the result need not be a literal value: it might be a field or variable name, a compound expression, or be NULL.

Example
CANVOTE = CASE
  WHEN AGE >= 18 THEN 'Yes'
  WHEN AGE < 18 THEN 'No'
  ELSE 'Unsure'
END

4.1.4. NULL in Expressions

NULL is not a value in SQL, but a state indicating that the value of the element either is unknown or it does not exist. It is not a zero, nor a void, nor an “empty string”, and it does not act like any value.

When you use NULL in numeric, string or date/time expressions, the result will always be NULL. When you use NULL in logical (Boolean) expressions, the result will depend on the type of the operation and on other participating values. When you compare a value to NULL, the result will be unknown.

NULL means NULL but, in Firebird, the logical result unknown is also represented by NULL.

Expressions Returning NULL

Expressions in this list will always return NULL:

1 + 2 + 3 + NULL
'Home ' || 'sweet ' || NULL
MyField = NULL
MyField <> NULL
NULL = NULL
not (NULL)

If it seems difficult to understand why, remember that NULL is a state that stands for “unknown”.

NULL in Logical Expressions

It has already been shown that NOT (NULL) results in NULL. The interaction is a bit more complicated for the logical AND and logical OR operators:

NULL or false  → NULL
NULL or true   → true
NULL or NULL   → NULL
NULL and false → false
NULL and true  → NULL
NULL and NULL  → NULL

As a basic rule-of-thumb, if applying TRUE instead of NULL produces a different result than applying FALSE, then the outcome of the original expression is unknown, or NULL.

Examples
(1 = NULL) or (1 <> 1)    -- returns NULL
(1 = NULL) or FALSE       -- returns NULL
(1 = NULL) or (1 = 1)     -- returns TRUE
(1 = NULL) or TRUE        -- returns TRUE
(1 = NULL) or (1 = NULL)  -- returns NULL
(1 = NULL) or UNKNOWN     -- returns NULL
(1 = NULL) and (1 <> 1)   -- returns FALSE
(1 = NULL) and FALSE      -- returns FALSE
(1 = NULL) and (1 = 1)    -- returns NULL
(1 = NULL) and TRUE       -- returns NULL
(1 = NULL) and (1 = NULL) -- returns NULL
(1 = NULL) and UNKNOWN    -- returns NULL

4.1.5. Subqueries

A subquery is a special form of expression that is actually a query embedded within another query. Subqueries are written in the same way as regular SELECT queries, but they must be enclosed in parentheses. Subquery expressions can be used in the following ways:

  • To specify an output column in the SELECT list

  • To obtain values or conditions for search predicates (the WHERE, HAVING clauses).

  • To produce a set that the enclosing query can select from, as though it were a regular table or view. Subqueries like this appear in the FROM clause (derived tables) or in a Common Table Expression (CTE)

Correlated Subqueries

A subquery can be correlated. A query is correlated when the subquery and the main query are interdependent. To process each record in the subquery, it is necessary to fetch a record in the main query; i.e. the subquery fully depends on the main query.

Sample Correlated Subquery
SELECT *
FROM Customers C
WHERE EXISTS
  (SELECT *
   FROM Orders O
   WHERE C.cnum = O.cnum
     AND O.adate = DATE '10.03.1990');

When subqueries are used to get the values of the output column in the SELECT list, a subquery must return a scalar result (see below).

Scalar Results

Subqueries used in search predicates, other than existential and quantified predicates, must return a scalar result; that is, not more than one column from not more than one matching row or aggregation. If the result would return more, a run-time error will occur (“Multiple rows in a singleton select…​”).

Although it is reporting a genuine error, the message can be slightly misleading. A “singleton SELECT” is a query that must not be capable of returning more than one row. However, “singleton” and “scalar” are not synonymous: not all singleton SELECTS are required to be scalar; and single-column selects can return multiple rows for existential and quantified predicates.

Subquery Examples
  1. A subquery as the output column in a SELECT list:

    SELECT
      e.first_name,
      e.last_name,
      (SELECT
           sh.new_salary
       FROM
           salary_history sh
       WHERE
           sh.emp_no = e.emp_no
       ORDER BY sh.change_date DESC ROWS 1) AS last_salary
    FROM
      employee e
  2. A subquery in the WHERE clause for obtaining the employee’s maximum salary and filtering by it:

    SELECT
      e.first_name,
      e.last_name,
      e.salary
    FROM employee e
    WHERE
      e.salary = (
        SELECT MAX(ie.salary)
        FROM employee ie
      )

4.2. Predicates

A predicate is a simple expression asserting some fact, let’s call it P. If P resolves as TRUE, it succeeds. If it resolves to FALSE or NULL (UNKNOWN), it fails. A trap lies here, though: suppose the predicate, P, returns FALSE. In this case NOT(P) will return TRUE. On the other hand, if P returns NULL (unknown), then NOT(P) returns NULL as well.

In SQL, predicates can appear in CHECK constraints, WHERE and HAVING clauses, CASE expressions, the IIF() function and in the ON condition of JOIN clauses, and — since Firebird 3.0 — anywhere a normal expression can occur.

4.2.1. Conditions

A condition — or Boolean expression — is a statement about the data that, like a predicate, can resolve to TRUE, FALSE or NULL. Conditions consist of one or more predicates, possibly negated using NOT and connected by AND and OR operators. Parentheses may be used for grouping predicates and controlling evaluation order.

A predicate may embed other predicates. Evaluation sequence is in the outward direction, i.e., the innermost predicates are evaluated first. Each “level” is evaluated in precedence order until the truth value of the ultimate condition is resolved.

4.2.2. Comparison Predicates

A comparison predicate consists of two expressions connected with a comparison operator. There are six traditional comparison operators:

=, >, <, >=, <=, <>

For the complete list of comparison operators with their variant forms, see Comparison Operators.

If one of the sides (left or right) of a comparison predicate has NULL in it, the value of the predicate will be UNKNOWN.

Examples
  1. Retrieve information about computers with the CPU frequency not less than 500 MHz and the price lower than $800:

    SELECT *
    FROM Pc
    WHERE speed >= 500 AND price < 800;
  2. Retrieve information about all dot matrix printers that cost less than $300:

    SELECT *
    FROM Printer
    WHERE ptrtype = 'matrix' AND price < 300;
  3. The following query will return no data, even if there are printers with no type specified for them, because a predicate that compares NULL with NULL returns NULL:

    SELECT *
    FROM Printer
    WHERE ptrtype = NULL AND price < 300;

    On the other hand, ptrtype can be tested for NULL and return a result: it is just that it is not a comparison test:

    SELECT *
    FROM Printer
    WHERE ptrtype IS NULL AND price < 300;

     — see IS [NOT] NULL.

Note about String Comparison

When CHAR and VARCHAR fields are compared for equality, trailing spaces are ignored in all cases.
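
A quick sketch of this rule:

SELECT 1 AS EQUAL FROM RDB$DATABASE
WHERE 'Smith' = 'Smith   '; -- one row: the trailing spaces are ignored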

Other Comparison Predicates

Other comparison predicates are marked by keywords.

BETWEEN
Available in

DSQL, PSQL, ESQL

Syntax
<value> [NOT] BETWEEN <value_1> AND <value_2>

The BETWEEN predicate tests whether a value falls within a specified range of two values. (NOT BETWEEN tests whether the value does not fall within that range.)

The operands for BETWEEN predicate are two arguments of compatible data types. Unlike in some other DBMS, the BETWEEN predicate in Firebird is not symmetrical — if the lower value is not the first argument, the BETWEEN predicate will always return FALSE. The search is inclusive (the values represented by both arguments are included in the search). In other words, the BETWEEN predicate could be rewritten:

<value> >= <value_1> AND <value> <= <value_2>
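
Because of the asymmetry noted above, reversing the arguments yields FALSE rather than an adjusted range, as a quick sketch shows:

SELECT 1 FROM RDB$DATABASE
WHERE 2 BETWEEN 3 AND 1; -- no rows: the lower value must come first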

When BETWEEN is used in the search conditions of DML queries, the Firebird optimizer can use an index on the searched column, if it is available.

Example
SELECT *
FROM EMPLOYEE
WHERE HIRE_DATE BETWEEN date '1992-01-01' AND CURRENT_DATE
LIKE
Available in

DSQL, PSQL, ESQL

Syntax
<match_value> [NOT] LIKE <pattern>
   [ESCAPE <escape character>]

<match_value>      ::= character-type expression
<pattern>          ::= search pattern
<escape character> ::= escape character

The LIKE predicate compares the character-type expression with the pattern defined in the second expression. Case- or accent-sensitivity for the comparison is determined by the collation that is in use. A collation can be specified for either operand, if required.

Wildcards

Two wildcard symbols are available for use in the search pattern:

  • the percentage symbol (%) will match any sequence of zero or more characters in the tested value

  • the underscore character (_) will match any single character in the tested value

If the tested value matches the pattern, taking into account wildcard symbols, the predicate is TRUE.

Using the ESCAPE Character Option

If the search string contains either of the wildcard symbols, the ESCAPE clause can be used to specify an escape character. The escape character must precede the ‘%’ or ‘_’ symbol in the search string, to indicate that the symbol is to be interpreted as a literal character.

Examples using LIKE
  1. Find the numbers of departments whose names start with the word “Software”:

    SELECT DEPT_NO
    FROM DEPT
    WHERE DEPT_NAME LIKE 'Software%';

    It is possible to use an index on the DEPT_NAME field if it exists.

    About LIKE and the Optimizer

    Actually, the LIKE predicate does not use an index. However, if the predicate takes the form of LIKE 'string%', it will be converted to the STARTING WITH predicate, which will use an index. This optimization only works for literal patterns, not for parameters.

    So, if you need to search for the beginning of a string, it is recommended to use the STARTING WITH predicate instead of the LIKE predicate.

  2. Search for employees whose names consist of 5 letters, start with the letters “Sm” and end with “th”. The predicate will be true for such names as “Smith” and “Smyth”.

    SELECT
      first_name
    FROM
      employee
    WHERE first_name LIKE 'Sm_th'
  3. Search for all clients whose address contains the string “Rostov”:

    SELECT *
    FROM CUSTOMER
    WHERE ADDRESS LIKE '%Rostov%'

    If you need to do a case-insensitive search for something enclosed inside a string (LIKE '%Abc%'), use of the CONTAINING predicate is recommended, in preference to the LIKE predicate.

  4. Search for tables containing the underscore character in their names. The ‘#’ character is used as the escape character:

    SELECT
      RDB$RELATION_NAME
    FROM RDB$RELATIONS
    WHERE RDB$RELATION_NAME LIKE '%#_%' ESCAPE '#'
STARTING WITH
Available in

DSQL, PSQL, ESQL

Syntax
<value> [NOT] STARTING WITH <value>

The STARTING WITH predicate searches for a string or a string-like type that starts with the characters in its value argument. The case- and accent-sensitivity of STARTING WITH depends on the collation of the first value.

When STARTING WITH is used in the search conditions of DML queries, the Firebird optimizer can use an index on the searched column, if it exists.

Example

Search for employees whose last names start with “Jo”:

SELECT LAST_NAME, FIRST_NAME
FROM EMPLOYEE
WHERE LAST_NAME STARTING WITH 'Jo'
See also

LIKE

CONTAINING
Available in

DSQL, PSQL, ESQL

Syntax
<value> [NOT] CONTAINING <value>

The CONTAINING predicate searches for a string or a string-like type looking for the sequence of characters that matches its argument. It can be used for an alphanumeric (string-like) search on numbers and dates. A CONTAINING search is not case-sensitive. However, if an accent-sensitive collation is in use then the search will be accent-sensitive.

Examples
  1. Search for projects whose names contain the substring “Map”:

    SELECT *
    FROM PROJECT
    WHERE PROJ_NAME CONTAINING 'Map';

    Two rows with the names “AutoMap” and “MapBrowser port” are returned.

  2. Search for changes in salaries with the date containing number 84 (in this case, it means changes that took place in 1984):

    SELECT *
    FROM SALARY_HISTORY
    WHERE CHANGE_DATE CONTAINING 84;
See also

LIKE

SIMILAR TO
Available in

DSQL, PSQL

Syntax
string-expression [NOT] SIMILAR TO <pattern> [ESCAPE <escape-char>]

<pattern>     ::= an SQL regular expression
<escape-char> ::= a single character

SIMILAR TO matches a string against an SQL regular expression pattern. Unlike in some other languages, the pattern must match the entire string in order to succeed — matching a substring is not enough. If any operand is NULL, the result is NULL. Otherwise, the result is TRUE or FALSE.

Syntax: SQL Regular Expressions

The following syntax defines the SQL regular expression format. It is a complete and correct top-down definition. It is also highly formal, rather long and probably perfectly fit to discourage everybody who hasn’t already some experience with regular expressions (or with highly formal, rather long top-down definitions). Feel free to skip it and read the next section, Building Regular Expressions, which uses a bottom-up approach, aimed at the rest of us.

<regular expression> ::= <regular term> ['|' <regular term> ...]

<regular term> ::= <regular factor> ...

<regular factor> ::= <regular primary> [<quantifier>]

<quantifier> ::= ? | * | + | '{' <m> [,[<n>]] '}'

<m>, <n> ::= unsigned int, with <m> <= <n> if both present

<regular primary> ::=
    <character> | <character class> | %
  | (<regular expression>)

<character> ::= <escaped character> | <non-escaped character>

<escaped character> ::=
  <escape-char> <special character> | <escape-char> <escape-char>

<special character> ::= any of the characters []()|^-+*%_?{}

<non-escaped character> ::=
  any character that is not a <special character>
  and not equal to <escape-char> (if defined)

<character class> ::=
    '_' | '[' <member> ... ']' | '[^' <non-member> ... ']'
  | '[' <member> ... '^' <non-member> ... ']'

<member>, <non-member> ::= <character> | <range> | <predefined class>

<range> ::= <character>-<character>

<predefined class> ::= '[:' <predefined class name> ':]'

<predefined class name> ::=
  ALPHA | UPPER | LOWER | DIGIT | ALNUM | SPACE | WHITESPACE
Building Regular Expressions

In this section are the elements and rules for building SQL regular expressions.

Characters

Within regular expressions, most characters represent themselves. The only exceptions are the special characters below:

[ ] ( ) | ^ - + * % _ ? { }

... and the escape character, if it is defined.

A regular expression that contains no special character or escape characters matches only strings that are identical to itself (subject to the collation in use). That is, it functions just like the ‘=’ operator:

'Apple' similar to 'Apple'  -- true
'Apples' similar to 'Apple' -- false
'Apple' similar to 'Apples' -- false
'APPLE' similar to 'Apple'  -- depends on collation
Wildcards

The known SQL wildcards ‘_’ and ‘%’ match any single character and a string of any length, respectively:

'Birne' similar to 'B_rne'   -- true
'Birne' similar to 'B_ne'    -- false
'Birne' similar to 'B%ne'    -- true
'Birne' similar to 'Bir%ne%' -- true
'Birne' similar to 'Birr%ne' -- false

Notice how ‘%’ also matches the empty string.

Character Classes

A bunch of characters enclosed in brackets define a character class. A character in the string matches a class in the pattern if the character is a member of the class:

'Citroen' similar to 'Cit[arju]oen'     -- true
'Citroen' similar to 'Ci[tr]oen'        -- false
'Citroen' similar to 'Ci[tr][tr]oen'    -- true

As can be seen from the second line, the class only matches a single character, not a sequence.

Within a class definition, two characters connected by a hyphen define a range. A range comprises the two endpoints and all the characters that lie between them in the active collation. Ranges can be placed anywhere in the class definition without special delimiters to keep them apart from the other elements.

'Datte' similar to 'Dat[q-u]e'          -- true
'Datte' similar to 'Dat[abq-uy]e'       -- true
'Datte' similar to 'Dat[bcg-km-pwz]e'   -- false
Predefined Character Classes

The following predefined character classes can also be used in a class definition:

[:ALPHA:]

Latin letters a..z and A..Z. With an accent-insensitive collation, this class also matches accented forms of these characters.

[:DIGIT:]

Decimal digits 0..9.

[:ALNUM:]

Union of [:ALPHA:] and [:DIGIT:].

[:UPPER:]

Uppercase Latin letters A..Z. Also matches lowercase with case-insensitive collation and accented forms with accent-insensitive collation.

[:LOWER:]

Lowercase Latin letters a..z. Also matches uppercase with case-insensitive collation and accented forms with accent-insensitive collation.

[:SPACE:]

Matches the space character (ASCII 32).

[:WHITESPACE:]

Matches horizontal tab (ASCII 9), linefeed (ASCII 10), vertical tab (ASCII 11), formfeed (ASCII 12), carriage return (ASCII 13) and space (ASCII 32).

Including a predefined class has the same effect as including all its members. Predefined classes are only allowed within class definitions. If you need to match against a predefined class and nothing more, place an extra pair of brackets around it.

'Erdbeere' similar to 'Erd[[:ALNUM:]]eere'     -- true
'Erdbeere' similar to 'Erd[[:DIGIT:]]eere'     -- false
'Erdbeere' similar to 'Erd[a[:SPACE:]b]eere'   -- true
'Erdbeere' similar to '[[:ALPHA:]]'            -- false
'E'        similar to '[[:ALPHA:]]'            -- true

If a class definition starts with a caret, everything that follows is excluded from the class. All other characters match:

'Framboise' similar to 'Fra[^ck-p]boise'       -- false
'Framboise' similar to 'Fr[^a][^a]boise'       -- false
'Framboise' similar to 'Fra[^[:DIGIT:]]boise'  -- true

If the caret is not placed at the start of the sequence, the class contains everything before the caret, except for the elements that also occur after the caret:

'Grapefruit' similar to 'Grap[a-m^f-i]fruit'   -- true
'Grapefruit' similar to 'Grap[abc^xyz]fruit'   -- false
'Grapefruit' similar to 'Grap[abc^de]fruit'    -- false
'Grapefruit' similar to 'Grap[abe^de]fruit'    -- false

'3' similar to '[[:DIGIT:]^4-8]'               -- true
'6' similar to '[[:DIGIT:]^4-8]'               -- false

Lastly, the already mentioned wildcard ‘_’ is a character class of its own, matching any single character.

Quantifiers

A question mark (‘?’) immediately following a character or class indicates that the preceding item may occur 0 or 1 times in order to match:

'Hallon' similar to 'Hal?on'                   -- false
'Hallon' similar to 'Hal?lon'                  -- true
'Hallon' similar to 'Halll?on'                 -- true
'Hallon' similar to 'Hallll?on'                -- false
'Hallon' similar to 'Halx?lon'                 -- true
'Hallon' similar to 'H[a-c]?llon[x-z]?'        -- true

An asterisk (‘*’) immediately following a character or class indicates that the preceding item may occur 0 or more times in order to match:

'Icaque' similar to 'Ica*que'                  -- true
'Icaque' similar to 'Icar*que'                 -- true
'Icaque' similar to 'I[a-c]*que'               -- true
'Icaque' similar to '_*'                       -- true
'Icaque' similar to '[[:ALPHA:]]*'             -- true
'Icaque' similar to 'Ica[xyz]*e'               -- false

A plus sign (‘+’) immediately following a character or class indicates that the preceding item must occur 1 or more times in order to match:

'Jujube' similar to 'Ju_+'                     -- true
'Jujube' similar to 'Ju+jube'                  -- true
'Jujube' similar to 'Jujuber+'                 -- false
'Jujube' similar to 'J[jux]+be'                -- true
'Jujube' similar to 'J[[:DIGIT:]]+ujube'       -- false

If a character or class is followed by a number enclosed in braces (‘{’ and ‘}’), it must be repeated exactly that number of times in order to match:

'Kiwi' similar to 'Ki{2}wi'                    -- false
'Kiwi' similar to 'K[ipw]{2}i'                 -- true
'Kiwi' similar to 'K[ipw]{2}'                  -- false
'Kiwi' similar to 'K[ipw]{3}'                  -- true

If the number is followed by a comma (‘,’), the item must be repeated at least that number of times in order to match:

'Limone' similar to 'Li{2,}mone'               -- false
'Limone' similar to 'Li{1,}mone'               -- true
'Limone' similar to 'Li[nezom]{2,}'            -- true

If the braces contain two numbers separated by a comma, the second number not smaller than the first, then the item must be repeated at least the first number and at most the second number of times in order to match:

'Mandarijn' similar to 'M[a-p]{2,5}rijn'       -- true
'Mandarijn' similar to 'M[a-p]{2,3}rijn'       -- false
'Mandarijn' similar to 'M[a-p]{2,3}arijn'      -- true

The quantifiers ‘?’, ‘*’ and ‘+’ are shorthand for {0,1}, {0,} and {1,}, respectively.

OR-ing Terms

Regular expression terms can be OR’ed with the ‘|’ operator. A match is made when the argument string matches at least one of the terms:

'Nektarin' similar to 'Nek|tarin'              -- false
'Nektarin' similar to 'Nektarin|Persika'       -- true
'Nektarin' similar to 'M_+|N_+|P_+'            -- true
Subexpressions

One or more parts of the regular expression can be grouped into subexpressions (also called subpatterns) by placing them between parentheses (‘(’ and ‘)’). A subexpression is a regular expression in its own right. It can contain all the elements allowed in a regular expression, and can also have quantifiers added to it.

'Orange' similar to 'O(ra|ri|ro)nge'           -- true
'Orange' similar to 'O(r[a-e])+nge'            -- true
'Orange' similar to 'O(ra){2,4}nge'            -- false
'Orange' similar to 'O(r(an|in)g|rong)?e'      -- true
Escaping Special Characters

In order to match against a character that is special in regular expressions, that character has to be escaped. There is no default escape character; rather, the user specifies one when needed:

'Peer (Poire)' similar to 'P[^ ]+ \(P[^ ]+\)' escape '\'    -- true
'Pera [Pear]'  similar to 'P[^ ]+ #[P[^ ]+#]' escape '#'    -- true
'Päron-äppledryck' similar to 'P%$-ä%' escape '$'           -- true
'Pärondryck' similar to 'P%--ä%' escape '-'                 -- false

The last line demonstrates that the escape character can also escape itself, if needed.

IS [NOT] DISTINCT FROM
Available in

DSQL, PSQL

Syntax
<operand1> IS [NOT] DISTINCT FROM <operand2>

Two operands are considered DISTINCT (different) if they have a different value or if one of them is NULL and the other non-null. They are considered NOT DISTINCT (equal) if they have the same value or if both of them are NULL.

IS [NOT] DISTINCT FROM always returns TRUE or FALSE, never UNKNOWN (NULL). The operators ‘=’ and ‘<>’, conversely, will return UNKNOWN (NULL) if one or both operands are NULL.

Table 19. Results of Various Comparison Predicates

Operand values          =        IS NOT DISTINCT FROM  <>       IS DISTINCT FROM
Same value              TRUE     TRUE                  FALSE    FALSE
Different values        FALSE    FALSE                 TRUE     TRUE
Both NULL               UNKNOWN  TRUE                  UNKNOWN  FALSE
One NULL, one non-NULL  UNKNOWN  FALSE                 UNKNOWN  TRUE

Examples
SELECT ID, NAME, TEACHER
FROM COURSES
WHERE START_DAY IS NOT DISTINCT FROM END_DAY;

-- PSQL fragment
IF (NEW.JOB IS DISTINCT FROM OLD.JOB)
THEN POST_EVENT 'JOB_CHANGED';
Boolean IS [NOT]
Available in

DSQL, PSQL

Syntax
<value> IS [NOT] { TRUE | FALSE | UNKNOWN }

The IS predicate with Boolean literal values checks if the expression on the left side matches the Boolean value on the right side. The expression on the left side must be of type BOOLEAN, otherwise an exception is raised.

IS [NOT] UNKNOWN is equivalent to IS [NOT] NULL.

The right side of the predicate only accepts the literals TRUE, FALSE and UNKNOWN (and NULL). It does not accept expressions.

Using the IS predicate with a Boolean data type
-- Checking FALSE value
SELECT * FROM TBOOL WHERE BVAL IS FALSE;

ID            BVAL
============= =======
2             <false>

-- Checking UNKNOWN value
SELECT * FROM TBOOL WHERE BVAL IS UNKNOWN;

ID            BVAL
============= =======
3             <null>
See also

IS [NOT] NULL

IS [NOT] NULL
Available in

DSQL, PSQL, ESQL

Syntax
<value> IS [NOT] NULL

Since NULL is not a value, these operators are not comparison operators. The IS [NOT] NULL predicate tests that the expression on the left side has a value (IS NOT NULL) or has no value (IS NULL).

Example

Search for sales entries that have no shipment date set for them:

SELECT * FROM SALES
WHERE SHIP_DATE IS NULL;
Note regarding the IS predicates

Up to and including Firebird 2.5, the IS predicates had the same precedence as the other comparison predicates. In Firebird 3.0 and higher, they take precedence over the others.
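As an illustrative sketch of the change (not an example from the original reference): under the Firebird 3.0 rules, a condition such as

FALSE = NULL IS NULL

is parsed as FALSE = (NULL IS NULL), which evaluates to FALSE, whereas with the old precedence the same expression would have been read as (FALSE = NULL) IS NULL, which evaluates to TRUE.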

4.2.3. Existential Predicates

This group of predicates includes those that use subqueries to submit values for all kinds of assertions in search conditions. Existential predicates are so called because they use various methods to test for the existence or non-existence of some condition, returning TRUE if the existence or non-existence is confirmed or FALSE otherwise.

EXISTS
Available in

DSQL, PSQL, ESQL

Syntax
[NOT] EXISTS (<select_stmt>)

The EXISTS predicate uses a subquery expression as its argument. It returns TRUE if the subquery result would contain at least one row; otherwise it returns FALSE.

NOT EXISTS returns FALSE if the subquery result would contain at least one row; it returns TRUE otherwise.

The subquery can specify multiple columns, or SELECT *, because the evaluation is made on the number of rows that match its criteria, not on the data.

Examples
  1. Find those employees who have projects.

    SELECT *
    FROM employee
    WHERE EXISTS(SELECT *
                 FROM  employee_project ep
                 WHERE ep.emp_no = employee.emp_no)
  2. Find those employees who have no projects.

    SELECT *
    FROM employee
    WHERE NOT EXISTS(SELECT *
                     FROM employee_project ep
                     WHERE ep.emp_no = employee.emp_no)
IN
Available in

DSQL, PSQL, ESQL

Syntax
<value> [NOT] IN (<select_stmt> | <value_list>)

<value_list> ::= <value_1> [, <value_2> …]

The IN predicate tests whether the value of the expression on the left side is present in the set of values specified on the right side. The set of values cannot have more than 1500 items. The IN predicate can be replaced with the following equivalent forms:

(<value> = <value_1> [OR <value> = <value_2> …])

<value> = { ANY | SOME } (<select_stmt>)

When the IN predicate is used in the search conditions of DML queries, the Firebird optimizer can use an index on the searched column, if a suitable one exists.

In its second form, the IN predicate tests whether the value of the expression on the left side is present — or not present, if NOT IN is used — in the result of the executed subquery on the right side.

The subquery must be specified to result in only one column, otherwise the error “count of column list and variable list do not match” will occur.

Queries specified using the IN predicate with a subquery can be replaced with a similar query using the EXISTS predicate. For instance, the following query:

SELECT
  model, speed, hd
FROM PC
WHERE
model IN (SELECT model
          FROM product
          WHERE maker = 'A');

can be replaced with a similar one using the EXISTS predicate:

SELECT
  model, speed, hd
FROM PC
WHERE
 EXISTS (SELECT *
         FROM product
         WHERE maker = 'A'
           AND product.model = PC.model);

However, a query using NOT IN with a subquery does not always give the same result as its NOT EXISTS counterpart. The reason is that EXISTS always returns TRUE or FALSE, whereas IN returns NULL in one of these two cases:

  1. when the test value is NULL and the IN () list is not empty

  2. when the test value has no match in the IN () list and at least one list element is NULL

It is in only these two cases that IN () will return NULL while the corresponding EXISTS predicate will return FALSE ('no matching row found'). In a search or, for example, an IF (…​) statement, both results mean “failure” and it makes no difference to the outcome.

But, for the same data, NOT IN () will return NULL, while NOT EXISTS will return TRUE, leading to opposite results.

As an example, suppose you have the following query:

-- Looking for people who were not born
-- on the same day as any famous New York citizen
SELECT P1.name AS NAME
FROM Personnel P1
WHERE P1.birthday NOT IN (SELECT C1.birthday
                          FROM Celebrities C1
                          WHERE C1.birthcity = 'New York');

Now, assume that the NY celebrities list is not empty and contains at least one NULL birthday. Then for every citizen who does not share his birthday with a NY celebrity, NOT IN will return NULL, because that is what IN does. The search condition is thereby not satisfied and the citizen will be left out of the SELECT result, which is wrong.

For citizens whose birthday does match with a celebrity’s birthday, NOT IN will correctly return FALSE, so they will be left out too, and no rows will be returned.

If the NOT EXISTS form is used:

-- Looking for people who were not born
-- on the same day as any famous New York citizen
SELECT P1.name AS NAME
FROM Personnel P1
WHERE NOT EXISTS (SELECT *
                  FROM Celebrities C1
                  WHERE C1.birthcity = 'New York'
                    AND C1.birthday = P1.birthday);

non-matches will have a NOT EXISTS result of TRUE and their records will be in the result set.

If there is any chance of NULLs being encountered when searching for a non-match, you will want to use NOT EXISTS.
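If you nevertheless prefer the NOT IN form, a common workaround, shown here as a sketch based on the query above, is to exclude the NULLs from the subquery explicitly. Note that, unlike NOT EXISTS, this still drops rows where P1.birthday itself is NULL:

SELECT P1.name AS NAME
FROM Personnel P1
WHERE P1.birthday NOT IN (SELECT C1.birthday
                          FROM Celebrities C1
                          WHERE C1.birthcity = 'New York'
                            AND C1.birthday IS NOT NULL);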

Examples of use
  1. Find employees with the names “Pete”, “Ann” and “Roger”:

    SELECT *
    FROM EMPLOYEE
    WHERE FIRST_NAME IN ('Pete', 'Ann', 'Roger');
  2. Find all computers that have models whose manufacturer starts with the letter “A”:

    SELECT
      model, speed, hd
    FROM PC
    WHERE
      model IN (SELECT model
                FROM product
                WHERE maker STARTING WITH 'A');
See also

EXISTS

SINGULAR
Available in

DSQL, PSQL, ESQL

Syntax
[NOT] SINGULAR (<select_stmt>)

The SINGULAR predicate takes a subquery as its argument and evaluates it as TRUE if the subquery returns exactly one result row; otherwise the predicate is evaluated as FALSE. The subquery may list several output columns since the rows are not returned anyway. They are only tested for (singular) existence. For brevity, people usually specify ‘SELECT *’. The SINGULAR predicate can return only two values: TRUE or FALSE.

Example

Find those employees who have only one project.

SELECT *
FROM employee
WHERE SINGULAR(SELECT *
               FROM employee_project ep
               WHERE ep.emp_no = employee.emp_no)

4.2.4. Quantified Subquery Predicates

A quantifier is a logical operator that specifies the number of objects for which a condition is true. It is not a numeric quantity but a logical one, connecting the condition with the full set of possible objects. Such predicates are based on the universal and existential quantifiers that are recognised in formal logic.

In subquery expressions, quantified predicates make it possible to compare separate values with the results of subqueries; they have the following common form:

<value expression> <comparison operator> <quantifier> <subquery>
ALL
Available in

DSQL, PSQL, ESQL

Syntax
<value> <op> ALL (<select_stmt>)

When the ALL quantifier is used, the predicate is TRUE if every value returned by the subquery satisfies the condition in the predicate of the main query.

Example

Show only those clients whose ratings are higher than the rating of every client in Paris.

SELECT c1.*
FROM Customers c1
WHERE c1.rating > ALL
      (SELECT c2.rating
       FROM Customers c2
       WHERE c2.city = 'Paris')

If the subquery returns an empty set, the predicate is TRUE for every left-side value, regardless of the operator. This may appear contradictory, because every left-side value is then considered both smaller and greater than, both equal and unequal to, every element of the right-side stream.

Nevertheless, it aligns with formal logic: if the set is empty, the condition fails for zero elements, so the predicate is vacuously true for every left-side value.
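A minimal sketch of this vacuous-truth behaviour, reusing the Customers table from the example above with a filter that deliberately produces an empty set:

SELECT c1.*
FROM Customers c1
WHERE c1.rating > ALL
      (SELECT c2.rating
       FROM Customers c2
       WHERE 1 = 0); -- empty subquery: every customer qualifies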

ANY and SOME
Available in

DSQL, PSQL, ESQL

Syntax
<value> <op> {ANY | SOME} (<select_stmt>)

The quantifiers ANY and SOME are identical in their behaviour. Both are present in the SQL standard and can be used interchangeably, which can improve the readability of queries. When the ANY or SOME quantifier is used, the predicate is TRUE if any of the values returned by the subquery satisfies the condition in the predicate of the main query. If the subquery returns no rows at all, the predicate is FALSE.

Example

Show only those clients whose ratings are higher than those of one or more clients in Rome.

SELECT *
FROM Customers
WHERE rating > ANY
      (SELECT rating
       FROM Customers
       WHERE city = 'Rome')

5. Data Definition (DDL) Statements

DDL is the data definition language subset of Firebird’s SQL language. DDL statements are used to create, modify and delete database objects that have been created by users. When a DDL statement is committed, the metadata for the object are created, changed or deleted.

5.1. DATABASE

This section describes how to create a database, connect to an existing database, alter the file structure of a database and how to delete one. It also explains how to back up a database in two quite different ways and how to switch the database to the “copy-safe” mode for performing an external backup safely.

5.1.1. CREATE DATABASE

Used for

Creating a new database

Available in

DSQL, ESQL

Syntax
CREATE {DATABASE | SCHEMA} <filespec>
  [<db_initial_option> [<db_initial_option> ...]]
  [<db_config_option> [<db_config_option> ...]]

<db_initial_option> ::=
    USER username
  | PASSWORD 'password'
  | ROLE rolename
  | PAGE_SIZE [=] size
  | LENGTH [=] num [PAGE[S]]
  | SET NAMES 'charset'

<db_config_option> ::=
    DEFAULT CHARACTER SET default_charset
      [COLLATION collation] -- not supported in ESQL
  | <sec_file>
  | DIFFERENCE FILE 'diff_file' -- not supported in ESQL

<filespec> ::= "'" [server_spec]{filepath | db_alias} "'"

<server_spec> ::=
    host[/{port | service}]:
  | \\host\
  | <protocol>://[host[:{port | service}]/]

<protocol> ::= inet | inet4 | inet6 | wnet | xnet

<sec_file> ::=
  FILE 'filepath'
  [LENGTH [=] num [PAGE[S]]]
  [STARTING [AT [PAGE]] pagenum]

Each db_initial_option and db_config_option can occur at most once, except sec_file, which can occur zero or more times.

Table 20. CREATE DATABASE Statement Parameters
Parameter Description

filespec

File specification for primary database file

server_spec

Remote server specification. Some protocols require specifying a hostname. Optionally includes a port number or service name. Required if the database is created on a remote server.

filepath

Full path and file name including its extension. The file name must be specified according to the rules of the platform file system being used.

db_alias

Database alias previously created in the databases.conf file

host

Host name or IP address of the server where the database is to be created

port

The port number on which the remote server is listening (the RemoteServicePort parameter in the firebird.conf file)

service

Service name. Must match the value of the RemoteServiceName parameter in the firebird.conf file.

username

Username of the owner of the new database. It may consist of up to 31 characters. The username can optionally be enclosed in single or double quotes. When a username is enclosed in double quotes, it is case-sensitive following the rules for quoted identifiers. When enclosed in single quotes, it behaves as if the value was specified without quotes. The user must be an administrator or have the CREATE DATABASE privilege.

password

Password of the user as the database owner. When using the Legacy_Auth authentication plugin, only the first 8 characters are used. Case-sensitive

rolename

The name of the role whose rights should be taken into account when creating a database. The role name can be enclosed in single or double quotes. When the role name is enclosed in double quotes, it is case-sensitive following the rules for quoted identifiers. When enclosed in single quotes, it behaves as if the value was specified without quotes.

size

Page size for the database, in bytes. Possible values are 4096, 8192 and 16384. The default page size is 8192.

num

Maximum size of the primary database file, or a secondary file, in pages

charset

Specifies the character set of the connection available to a client connecting after the database is successfully created. Single quotes are required.

default_charset

Specifies the default character set for string data types

collation

Default collation for the default character set

sec_file

File specification for a secondary file

pagenum

Starting page number for a secondary database file

diff_file

File path and name for DIFFERENCE files (.delta files) for backup mode

The CREATE DATABASE statement creates a new database. You can use CREATE DATABASE or CREATE SCHEMA. They are synonymous, but we recommend always using CREATE DATABASE, as this may change in a future version of Firebird.

A database may consist of one or several files. The first (main) file is called the primary file, subsequent files are called secondary file(s).

Multi-file Databases

Nowadays, multi-file databases are considered an anachronism. It made sense to use multi-file databases on old file systems where the size of any file is limited. For instance, you could not create a file larger than 4 GB on FAT32.

The primary file specification is the name of the database file and its extension with the full path to it according to the rules of the OS platform file system being used. The database file must not exist at the moment the database is being created. If it does exist, you will get an error message, and the database will not be created.

If the full path to the database is not specified, the database will be created in one of the system directories. The particular directory depends on the operating system. For this reason, unless you have a strong reason to prefer that situation, always specify either the absolute path or an alias, when creating a database.

Using a Database Alias

You can use aliases instead of the full path to the primary database file. Aliases are defined in the databases.conf file in the following format:

alias = filepath
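For example, a databases.conf entry might look like this (the path is hypothetical):

test = /data/firebird/test.fdb

After that, the alias can be used instead of the file path, as in CREATE DATABASE 'baseserver:test'.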

Executing a CREATE DATABASE statement requires special consideration in the client application or database driver. As a result, it is not always possible to execute a CREATE DATABASE statement. Some drivers provide other ways to create databases. For example, Jaybird provides the class org.firebirdsql.management.FBManager to programmatically create a database.

If necessary, you can always fall back to isql to create a database.

Creating a Database on a Remote Server

If you create a database on a remote server, you need to specify the remote server specification. The remote server specification depends on the protocol being used. If you use the TCP/IP protocol to create a database, the primary file specification should look like this:

host[/{port|service}]:{filepath | db_alias}

If you use the Named Pipes protocol to create a database on a Windows server, the primary file specification should look like this:

\\host\{filepath | db_alias}

Since Firebird 3.0, there is also a unified URL-like syntax for the remote server specification. In this syntax, the first part specifies the name of the protocol, then a host name or IP address, port number, and path of the primary database file, or an alias.

The following values can be specified as the protocol:

INET

TCP/IP (first tries to connect using the IPv6 protocol, if it fails, then IPv4)

INET4

TCP/IP v4 (since Firebird 3.0.1)

INET6

TCP/IP v6 (since Firebird 3.0.1)

WNET

NetBEUI or Named Pipes Protocol

XNET

local protocol (does not include a host, port and service name)

<protocol>://[host[:{port | service}]/]{filepath | db_alias}
Optional Parameters for CREATE DATABASE
USER and PASSWORD

Clauses for specifying the username and the password, respectively, of an existing user in the security database (security3.fdb or whatever is configured in the SecurityDatabase configuration). You do not have to specify the username and password if the ISC_USER and ISC_PASSWORD environment variables are set. The user specified in the process of creating the database will be its owner. This will be important when considering database and object privileges.

ROLE

The ROLE clause specifies the name of the role (usually RDB$ADMIN), which will be taken into account when creating the database. The role must be assigned to the user in the applicable security database.

PAGE_SIZE

Clause for specifying the database page size. This size will be set for the primary file and all secondary files of the database. If you specify a database page size of less than 4,096, it will be automatically rounded up to 4,096. Other values not equal to 4,096, 8,192 or 16,384 will be changed to the closest smaller supported value. If the database page size is not specified, the default value of 8,192 is used.

LENGTH

Clause specifying the maximum size of the primary or secondary database file, in pages. When a database is created, its primary and secondary files will occupy the minimum number of pages necessary to store the system data, regardless of the value specified in the LENGTH clause. The LENGTH value does not affect the size of the only (or last, in a multi-file database) file. The file will keep increasing its size automatically when necessary.

SET NAMES

Clause specifying the character set of the connection available after the database is successfully created. The character set NONE is used by default. Notice that the character set should be enclosed in a pair of apostrophes (single quotes).

DEFAULT CHARACTER SET

Clause specifying the default character set for creating data structures of string data types. Character sets are used for CHAR, VARCHAR and BLOB SUB_TYPE TEXT data types. The character set NONE is used by default. It is also possible to specify the default COLLATION for the default character set, making that collation sequence the default for the default character set. The default will be used for the entire database except where an alternative character set, with or without a specified collation, is used explicitly for a field, domain, variable, cast expression, etc.

STARTING AT

Clause that specifies the database page number at which the next secondary database file should start. When the previous file is completely filled with data according to the specified page number, the system will start adding new data to the next database file.

DIFFERENCE FILE

Clause specifying the path and name for the file delta that stores any mutations to the database file after it has been switched to the “copy-safe” mode by the ALTER DATABASE BEGIN BACKUP statement. For the detailed description of this clause, see ALTER DATABASE.

Specifying the Database Dialect

Databases are created in Dialect 3 by default. For the database to be created in SQL dialect 1, you will need to execute the statement SET SQL DIALECT 1 from script or the client application, e.g. in isql, before the CREATE DATABASE statement.

Who Can Create a Database

The CREATE DATABASE statement can be executed by:

  • Administrators

  • Users with the CREATE DATABASE privilege

Examples Using CREATE DATABASE
  1. Creating a database in Windows, located on disk D with a page size of 4,096. The owner of the database will be the user wizard. The database will be in Dialect 1, and will use WIN1251 as its default character set.

    SET SQL DIALECT 1;
    CREATE DATABASE 'D:\test.fdb'
    USER 'wizard' PASSWORD 'player'
    PAGE_SIZE = 4096 DEFAULT CHARACTER SET WIN1251;
  2. Creating a database in the Linux operating system with a page size of 8,192 (default). The owner of the database will be the user wizard. The database will be in Dialect 3 and will use UTF8 as its default character set, with UNICODE_CI_AI as the default collation.

    CREATE DATABASE '/home/firebird/test.fdb'
    USER 'wizard' PASSWORD 'player'
    DEFAULT CHARACTER SET UTF8 COLLATION UNICODE_CI_AI;
  3. Creating a database on the remote server “baseserver” with the path specified in the alias “test” that has been defined previously in the file databases.conf. The TCP/IP protocol is used. The owner of the database will be the user wizard. The database will be in Dialect 3 and will use UTF8 as its default character set.

    CREATE DATABASE 'baseserver:test'
    USER 'wizard' PASSWORD 'player'
    DEFAULT CHARACTER SET UTF8;
  4. Creating a database in Dialect 3 with UTF8 as its default character set. The primary file will contain up to 10,000 pages with a page size of 8,192. As soon as the primary file has reached the maximum number of pages, Firebird will start allocating pages to the secondary file test.fdb2. If that file is filled up to its maximum as well, test.fdb3 becomes the recipient of all new page allocations. As the last file, it has no page limit imposed on it by Firebird. New allocations will continue for as long as the file system allows it or until the storage device runs out of free space. If a LENGTH parameter were supplied for this last file, it would be ignored.

    SET SQL DIALECT 3;
    CREATE DATABASE 'baseserver:D:\test.fdb'
    USER 'wizard' PASSWORD 'player'
    PAGE_SIZE = 8192
    DEFAULT CHARACTER SET UTF8
    FILE 'D:\test.fdb2'
    STARTING AT PAGE 10001
    FILE 'D:\test.fdb3'
    STARTING AT PAGE 20001;
  5. Creating a database in Dialect 3 with UTF8 as its default character set. The primary file will contain up to 10,000 pages with a page size of 8,192. As far as file size and the use of secondary files are concerned, this database will behave exactly like the one in the previous example.

    SET SQL DIALECT 3;
    CREATE DATABASE 'baseserver:D:\test.fdb'
    USER 'wizard' PASSWORD 'player'
    PAGE_SIZE = 8192
    LENGTH 10000 PAGES
    DEFAULT CHARACTER SET UTF8
    FILE 'D:\test.fdb2'
    FILE 'D:\test.fdb3'
    STARTING AT PAGE 20001;

5.1.2. ALTER DATABASE

Used for

Altering the file organisation of a database, toggling its “copy-safe” state, managing encryption, and other database-wide configuration

Available in

DSQL, ESQL — limited feature set

Syntax
ALTER {DATABASE | SCHEMA} <alter_db_option> [<alter_db_option> ...]

<alter_db_option> ::=
    <add_sec_clause>
  | {ADD DIFFERENCE FILE 'diff_file' | DROP DIFFERENCE FILE}
  | {BEGIN | END} BACKUP
  | SET DEFAULT CHARACTER SET charset
  | SET LINGER TO linger_duration
  | DROP LINGER
  | {ENCRYPT WITH plugin_name [KEY key_name] | DECRYPT}

<add_sec_clause> ::= ADD <sec_file> [<sec_file> ...]

<sec_file> ::=
  FILE 'filepath'
  [STARTING [AT [PAGE]] pagenum]
  [LENGTH [=] num [PAGE[S]]]

Multiple files can be added in one ADD clause:

ALTER DATABASE
  ADD FILE x LENGTH 8000
    FILE y LENGTH 8000
    FILE z

Multiple occurrences of add_sec_clause (ADD FILE clauses) are allowed; an ADD FILE clause that adds multiple files (as in the example above) can be mixed with others that add only one file. The statement was documented incorrectly in the old InterBase 6 Language Reference.

Table 21. ALTER DATABASE Statement Parameters
Parameter Description

add_sec_clause

Adding a secondary database file

sec_file

File specification for secondary file

filepath

Full path and file name of the delta file or secondary database file

pagenum

Page number from which the secondary database file is to start

num

Maximum size of the secondary file in pages

diff_file

File path and name of the .delta file (difference file)

charset

New default character set of the database

linger_duration

Duration of linger delay in seconds; must be greater than or equal to 0 (zero)

plugin_name

The name of the encryption plugin

key_name

The name of the encryption key

The ALTER DATABASE statement can:

  • add secondary files to a database

  • switch a single-file database into and out of the “copy-safe” mode (DSQL only)

  • set or unset the path and name of the delta file for physical backups (DSQL only)

  • change the default character set of the database

  • set or drop the database linger delay

  • encrypt or decrypt the database

SCHEMA is currently a synonym for DATABASE; this may change in a future version, so we recommend always using DATABASE.

Who Can Alter the Database

The ALTER DATABASE statement can be executed by:

  • Administrators

  • The owner of the database

  • Users with the ALTER DATABASE privilege

Parameters for ALTER DATABASE
ADD (FILE)

Adds secondary files to the database. It is necessary to specify the full path to the file and the name of the secondary file. The description for the secondary file is similar to the one given for the CREATE DATABASE statement.

ADD DIFFERENCE FILE

Specifies the path and name of the delta file that stores any mutations to the database whenever it is switched to the “copy-safe” mode. This clause does not actually add any file. It just overrides the default name and path of the .delta file. To change the existing settings, you should delete the previously specified description of the .delta file using the DROP DIFFERENCE FILE clause before specifying the new description of the delta file. If the path and name of the .delta file are not overridden, the file will have the same path and name as the database, but with the .delta file extension.

If only a file name is specified, the .delta file will be created in the current directory of the server. On Windows, this will be the system directory — a very unwise location to store volatile user files and contrary to Windows file system rules.

DROP DIFFERENCE FILE

Deletes the description (path and name) of the .delta file specified previously in the ADD DIFFERENCE FILE clause. The file is not actually deleted. DROP DIFFERENCE FILE deletes the path and name of the .delta file from the database header. Next time the database is switched to the “copy-safe” mode, the default values will be used (i.e. the same path and name as those of the database, but with the .delta extension).

BEGIN BACKUP

Switches the database to the “copy-safe” mode. ALTER DATABASE with this clause freezes the main database file, making it possible to back it up safely using file system tools, even if users are connected and performing operations with data. Until the backup state of the database is reverted to NORMAL, all changes made to the database will be written to the .delta (difference) file.

Despite its syntax, a statement with the BEGIN BACKUP clause does not start a backup process but just creates the conditions for doing a task that requires the database file to be read-only temporarily.
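A typical external-backup sequence is therefore a sketch along these lines, with the actual copying done by file system tools between the two statements:

ALTER DATABASE BEGIN BACKUP;
-- copy the database file with file system tools here
ALTER DATABASE END BACKUP;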

END BACKUP

Switches the database from the “copy-safe” mode to the normal mode. A statement with this clause merges the .delta file with the main database file and restores the normal operation of the database. Once the END BACKUP process starts, the conditions no longer exist for creating safe backups by means of file system tools.

Using BEGIN BACKUP and END BACKUP and copying the database files with file system tools is not safe with multi-file databases! Use this method only on single-file databases.

Making a safe backup with the gbak utility remains possible at all times, although running gbak while the database is in the LOCKED or MERGE state is not recommended.

SET DEFAULT CHARACTER SET

Changes the default character set of the database. This change does not affect existing data or columns. The new default character set will only be used in subsequent DDL commands.

SET LINGER TO

Sets the linger-delay. The linger-delay applies only to Firebird SuperServer, and is the number of seconds the server keeps a database file (and its caches) open after the last connection to that database was closed. This can help to improve performance at low cost, when the database is opened and closed frequently, by keeping resources “warm” for the next connection.

This mode can be useful for web applications - without a connection pool - where the connection to the database usually “lives” for a very short time.

The SET LINGER TO and DROP LINGER clauses can be combined in a single statement, but the last clause “wins”. For example, ALTER DATABASE SET LINGER TO 5 DROP LINGER will set the linger-delay to 0 (no linger), while ALTER DATABASE DROP LINGER SET LINGER TO 5 will set the linger-delay to 5 seconds.

DROP LINGER

Drops the linger-delay (sets it to zero). Using DROP LINGER is equivalent to using SET LINGER TO 0.

Dropping LINGER is not an ideal solution for the occasional need to turn it off for some once-only condition where the server needs a forced shutdown. The gfix utility now has the -NoLinger switch, which will close the specified database immediately after the last attachment is gone, regardless of the LINGER setting in the database. The LINGER setting is retained and works normally the next time.

The same one-off override is also available through the Services API, using the tag isc_spb_prp_nolinger, e.g. (in one line):

fbsvcmgr host:service_mgr user sysdba password xxx
       action_properties dbname employee prp_nolinger

ENCRYPT WITH

See Encrypting a Database in the Security chapter.

DECRYPT

See Decrypting a Database in the Security chapter.

Examples of ALTER DATABASE Usage
  1. Adding a secondary file to the database. As soon as 30000 pages are filled in the previous primary or secondary file, the Firebird engine will start adding data to the secondary file test4.fdb.

    ALTER DATABASE
      ADD FILE 'D:\test4.fdb'
        STARTING AT PAGE 30001;
  2. Specifying the path and name of the delta file:

    ALTER DATABASE
      ADD DIFFERENCE FILE 'D:\test.diff';
  3. Deleting the description of the delta file:

    ALTER DATABASE
      DROP DIFFERENCE FILE;
  4. Switching the database to the “copy-safe” mode:

    ALTER DATABASE
      BEGIN BACKUP;
  5. Switching the database back from the “copy-safe” mode to the normal operation mode:

    ALTER DATABASE
      END BACKUP;
  6. Changing the default character set for a database to WIN1251

    ALTER DATABASE
      SET DEFAULT CHARACTER SET WIN1251;
  7. Setting a linger-delay of 30 seconds

    ALTER DATABASE
      SET LINGER TO 30;
  8. Encrypting the database with a plugin called DbCrypt

    ALTER DATABASE
      ENCRYPT WITH DbCrypt;
  9. Decrypting the database

    ALTER DATABASE
      DECRYPT;

5.1.3. DROP DATABASE

Used for

Deleting the database to which you are currently connected

Available in

DSQL, ESQL

Syntax
DROP DATABASE

The DROP DATABASE statement deletes the current database. Before deleting a database, you have to connect to it. The statement deletes the primary file, all secondary files and all shadow files.

Unlike CREATE DATABASE and ALTER DATABASE, DROP SCHEMA is not a valid alias for DROP DATABASE. This is intentional.

Who Can Drop a Database

The DROP DATABASE statement can be executed by:

  • Administrators

  • The owner of the database

  • Users with the DROP DATABASE privilege

Example of DROP DATABASE
Deleting the current database
DROP DATABASE;

5.2. SHADOW

A shadow is an exact, page-by-page copy of a database. Once a shadow is created, all changes made in the database are immediately reflected in the shadow. If the primary database file becomes unavailable for some reason, the DBMS will switch to the shadow.

This section describes how to create and delete shadow files.

5.2.1. CREATE SHADOW

Used for

Creating a shadow for the current database

Available in

DSQL, ESQL

Syntax
CREATE SHADOW <sh_num> [{AUTO | MANUAL}] [CONDITIONAL]
  'filepath' [LENGTH [=] num [PAGE[S]]]
  [<secondary_file> ...]

<secondary_file> ::=
  FILE 'filepath'
  [STARTING [AT [PAGE]] pagenum]
  [LENGTH [=] num [PAGE[S]]]
Table 22. CREATE SHADOW Statement Parameters
Parameter Description

sh_num

Shadow number — a positive number identifying the shadow set

filepath

The name of the shadow file and the path to it, in accord with the rules of the operating system

num

Maximum shadow size, in pages

secondary_file

Secondary file specification

pagenum

The number of the page at which the secondary shadow file should start

The CREATE SHADOW statement creates a new shadow. The shadow starts duplicating the database right at the moment it is created. It is not possible for a user to connect to a shadow.

Like a database, a shadow may be multi-file. The number and size of a shadow’s files are not related to the number and size of the files of the database it is shadowing.

The page size for shadow files is set to be equal to the database page size and cannot be changed.

If a calamity occurs involving the original database, the system converts the shadow to a copy of the database and switches to it. The shadow is then unavailable. What happens next depends on the MODE option.

AUTO | MANUAL Modes

When a shadow is converted to a database, it becomes unavailable. A shadow might alternatively become unavailable because someone accidentally deletes its file, or the disk space where the shadow files are stored is exhausted or is itself damaged.

  • If the AUTO mode is selected (the default value), shadowing ceases automatically, all references to it are deleted from the database header, and the database continues functioning normally.

    If the CONDITIONAL option was set, the system will attempt to create a new shadow to replace the lost one. It does not always succeed, however, and a new one may need to be created manually.

  • If the MANUAL mode attribute is set when the shadow becomes unavailable, all attempts to connect to the database and to query it will produce error messages. The database will remain inaccessible until either the shadow again becomes available, or the database administrator deletes it using the DROP SHADOW statement. MANUAL should be selected if continuous shadowing is more important than uninterrupted operation of the database.

Options for CREATE SHADOW
LENGTH

Specifies the maximum size of the primary or secondary shadow file in pages. The LENGTH value does not affect the size of the only shadow file, nor the last if it is a set. The last (or only) file will keep automatically growing as long as it is necessary.

STARTING AT

Specifies the shadow page number at which the next shadow file should start. The system will start adding new data to the next shadow file when the previous file is filled with data up to the specified page number.

You can verify the sizes, names and location of the shadow files by connecting to the database using isql and running the command SHOW DATABASE;
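For instance, in an isql session connected to the database (illustrative):

SHOW DATABASE;

The output includes the database header information, with any shadow files listed along with their sizes and locations.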

Who Can Create a Shadow

The CREATE SHADOW statement can be executed by:

  • Administrators

  • The owner of the database

  • Users with the ALTER DATABASE privilege

Examples Using CREATE SHADOW
  1. Creating a shadow for the current database as “shadow number 1”:

    CREATE SHADOW 1 'g:\data\test.shd';
  2. Creating a multi-file shadow for the current database as “shadow number 2”:

    CREATE SHADOW 2 'g:\data\test.sh1'
      LENGTH 8000 PAGES
      FILE 'g:\data\test.sh2';

5.2.2. DROP SHADOW

Used for

Deleting a shadow from the current database

Available in

DSQL, ESQL

Syntax
DROP SHADOW sh_num
  [{DELETE | PRESERVE} FILE]
Table 23. DROP SHADOW Statement Parameter
Parameter Description

sh_num

Shadow number — a positive number identifying the shadow set

The DROP SHADOW statement deletes the specified shadow for the current database. When a shadow is dropped, all files related to it are deleted and shadowing to the specified sh_num ceases. The optional DELETE FILE clause makes this behaviour explicit. By contrast, the PRESERVE FILE clause removes the shadow from the database, but the file itself is not deleted.

Who Can Drop a Shadow

The DROP SHADOW statement can be executed by:

  • Administrators

  • The owner of the database

  • Users with the ALTER DATABASE privilege

Example of DROP SHADOW
Deleting “shadow number 1”.
DROP SHADOW 1;
See also

CREATE SHADOW

5.3. DOMAIN

DOMAIN is one of the object types in a relational database. A domain is created as a specific data type with some attributes attached to it. Once it has been defined in the database, it can be reused repeatedly to define table columns, PSQL arguments and PSQL local variables. Those objects inherit all of the attributes of the domain. Some attributes can be overridden when the new object is defined, if required.

This section describes the syntax of statements used to create, modify and delete domains. A detailed description of domains and their usage can be found in Custom Data Types — Domains.

5.3.1. CREATE DOMAIN

Used for

Creating a new domain

Available in

DSQL, ESQL

Syntax
CREATE DOMAIN name [AS] <datatype>
  [DEFAULT {<literal> | NULL | <context_var>}]
  [NOT NULL] [CHECK (<dom_condition>)]
  [COLLATE collation_name]

<datatype> ::=
  <scalar_datatype> | <blob_datatype> | <array_datatype>

<scalar_datatype> ::=
  !! See Scalar Data Types Syntax !!

<blob_datatype> ::=
  !! See BLOB Data Types Syntax !!

<array_datatype> ::=
  !! See Array Data Types Syntax !!

<dom_condition> ::=
    <val> <operator> <val>
  | <val> [NOT] BETWEEN <val> AND <val>
  | <val> [NOT] IN ({<val> [, <val> ...] | <select_list>})
  | <val> IS [NOT] NULL
  | <val> IS [NOT] DISTINCT FROM <val>
  | <val> [NOT] CONTAINING <val>
  | <val> [NOT] STARTING [WITH] <val>
  | <val> [NOT] LIKE <val> [ESCAPE <val>]
  | <val> [NOT] SIMILAR TO <val> [ESCAPE <val>]
  | <val> <operator> {ALL | SOME | ANY} (<select_list>)
  | [NOT] EXISTS (<select_expr>)
  | [NOT] SINGULAR (<select_expr>)
  | (<dom_condition>)
  | NOT <dom_condition>
  | <dom_condition> OR <dom_condition>
  | <dom_condition> AND <dom_condition>

<operator> ::=
    <> | != | ^= | ~= | = | < | > | <= | >=
  | !< | ^< | ~< | !> | ^> | ~>

<val> ::=
    VALUE
  | <literal>
  | <context_var>
  | <expression>
  | NULL
  | NEXT VALUE FOR genname
  | GEN_ID(genname, <val>)
  | CAST(<val> AS <cast_type>)
  | (<select_one>)
  | func([<val> [, <val> ...]])

<cast_type> ::= <domain_or_non_array_type> | <array_datatype>

<domain_or_non_array_type> ::=
  !! See Scalar Data Types Syntax !!
Table 24. CREATE DOMAIN Statement Parameters
Parameter Description

name

Domain name consisting of up to 31 characters

datatype

SQL data type

literal

A literal value that is compatible with datatype

context_var

Any context variable whose type is compatible with datatype

dom_condition

Domain condition

collation_name

Name of a collation sequence that is valid for charset_name, if it is supplied with datatype or, otherwise, is valid for the default character set of the database

select_one

A scalar SELECT statement — selecting one column and returning only one row

select_list

A SELECT statement selecting one column and returning zero or more rows

select_expr

A SELECT statement selecting one or more columns and returning zero or more rows

expression

An expression resolving to a value that is compatible with datatype

genname

Sequence (generator) name

func

Internal function or UDF

The CREATE DOMAIN statement creates a new domain.

Any SQL data type can be specified as the domain type.

Type-specific Details
Array Types
  • If the domain is to be an array, the base type can be any SQL data type except BLOB and array.

  • The dimensions of the array are specified between square brackets. (In the Syntax block, these brackets appear in quotes to distinguish them from the square brackets that identify optional syntax elements.)

  • For each array dimension, one or two integer numbers define the lower and upper boundaries of its index range:

    • By default, arrays are 1-based. The lower boundary is implicit and only the upper boundary need be specified. A single number smaller than 1 defines the range num..1 and a number greater than 1 defines the range 1..num.

    • Two numbers separated by a colon (‘:’) and optional whitespace, the second greater than the first, can be used to define the range explicitly. One or both boundaries can be less than zero, as long as the upper boundary is greater than the lower.

  • When the array has multiple dimensions, the range definitions for each dimension must be separated by commas and optional whitespace.

  • Subscripts are validated only if an array actually exists. It means that no error messages regarding invalid subscripts will be returned if selecting a specific element returns nothing or if an array field is NULL.

String Types

You can use the CHARACTER SET clause to specify the character set for the CHAR, VARCHAR and BLOB (SUB_TYPE TEXT) types. If the character set is not specified, the character set specified as DEFAULT CHARACTER SET of the database will be used. If the database has no default character set, the character set NONE is applied by default when you create a character domain.

With character set NONE, character data are stored and retrieved the way they were submitted. Data in any encoding can be added to a column based on such a domain, but it is impossible to add this data to a column with a different encoding. Because no transliteration is performed between the source and destination encodings, errors may result.

DEFAULT Clause

The optional DEFAULT clause allows you to specify a default value for the domain. This value will be added to the table column that inherits this domain when the INSERT statement is executed, if no value is specified for it in the DML statement. Local variables and arguments in PSQL modules that reference this domain will be initialized with the default value. For the default value, use a literal of a compatible type or a context variable of a compatible type.

NOT NULL Constraint

Columns and variables based on a domain with the NOT NULL constraint will be prevented from being written as NULL, i.e., a value is required.

When creating a domain, take care to avoid specifying limitations that would contradict one another. For instance, NOT NULL and DEFAULT NULL are contradictory.
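For instance, the following definition (with a hypothetical domain name) is self-contradictory and should be avoided:

CREATE DOMAIN D_BROKEN AS
  INTEGER DEFAULT NULL
  NOT NULL;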

CHECK Constraint(s)

The optional CHECK clause specifies constraints for the domain. A domain constraint specifies conditions that must be satisfied by the values of table columns or variables that inherit from the domain. A condition must be enclosed in parentheses. A condition is a logical expression (also called a predicate) that can return the Boolean results TRUE, FALSE and UNKNOWN. A condition is considered satisfied if the predicate returns the value TRUE or “unknown value” (equivalent to NULL). If the predicate returns FALSE, the condition for acceptance is not met.

VALUE Keyword

The keyword VALUE in a domain constraint substitutes for the table column that is based on this domain or for a variable in a PSQL module. It contains the value assigned to the variable or the table column. VALUE can be used anywhere in the CHECK constraint, though it is usually used in the left part of the condition.

COLLATE

The optional COLLATE clause allows you to specify the collation sequence if the domain is based on one of the string data types, including BLOBs with text subtypes. If no collation sequence is specified, the collation sequence will be the one that is default for the specified character set at the time the domain is created.

Who Can Create a Domain

The CREATE DOMAIN statement can be executed by:

  • Administrators

  • Users with the CREATE DOMAIN privilege

CREATE DOMAIN Examples
  1. Creating a domain that can take values greater than 1,000, with a default value of 10,000.

    CREATE DOMAIN CUSTNO AS
      INTEGER DEFAULT 10000
      CHECK (VALUE > 1000);
  2. Creating a domain that can take the values 'Yes' and 'No' in the default character set specified during the creation of the database.

    CREATE DOMAIN D_BOOLEAN AS
      CHAR(3) CHECK (VALUE IN ('Yes', 'No'));
  3. Creating a domain with the UTF8 character set and the UNICODE_CI_AI collation sequence.

    CREATE DOMAIN FIRSTNAME AS
      VARCHAR(30) CHARACTER SET UTF8
      COLLATE UNICODE_CI_AI;
  4. Creating a domain of the DATE type that will not accept NULL and uses the current date as the default value.

    CREATE DOMAIN D_DATE AS
      DATE DEFAULT CURRENT_DATE
      NOT NULL;
  5. Creating a domain defined as an array of 2 elements of the NUMERIC(18, 3) type. The starting array index is 1.

    CREATE DOMAIN D_POINT AS
      NUMERIC(18, 3) [2];

    Domains defined over an array type may be used only to define table columns. You cannot use array domains to define local variables in PSQL modules.

  6. Creating a domain whose elements can be only country codes defined in the COUNTRY table.

    CREATE DOMAIN D_COUNTRYCODE AS CHAR(3)
      CHECK (EXISTS(SELECT * FROM COUNTRY
             WHERE COUNTRYCODE = VALUE));

    The example is given only to show the possibility of using predicates with queries in the domain test condition. It is not recommended to create this style of domain in practice unless the lookup table contains data that are never deleted.

5.3.2. ALTER DOMAIN

Used for

Altering the current attributes of a domain or renaming it

Available in

DSQL, ESQL

Syntax
ALTER DOMAIN domain_name
  [TO new_name]
  [TYPE <datatype>]
  [{SET DEFAULT {<literal> | NULL | <context_var>} | DROP DEFAULT}]
  [{SET | DROP} NOT NULL]
  [{ADD [CONSTRAINT] CHECK (<dom_condition>) | DROP CONSTRAINT}]

<datatype> ::=
   <scalar_datatype> | <blob_datatype>

<scalar_datatype> ::=
  !! See Scalar Data Types Syntax !!

<blob_datatype> ::=
  !! See BLOB Data Types Syntax !!

!! See also CREATE DOMAIN Syntax !!
Table 25. ALTER DOMAIN Statement Parameters
Parameter Description

new_name

New name for domain, consisting of up to 31 characters

literal

A literal value that is compatible with datatype

context_var

Any context variable whose type is compatible with datatype

The ALTER DOMAIN statement enables changes to the current attributes of a domain, including its name. You can make any number of domain alterations in one ALTER DOMAIN statement.

ALTER DOMAIN clauses
TO name

Use the TO clause to rename the domain, as long as there are no dependencies on the domain, i.e. table columns, local variables or procedure arguments referencing it.

SET DEFAULT

With the SET DEFAULT clause you can set a new default value. If the domain already has a default value, there is no need to delete it first — it will be replaced by the new one.

DROP DEFAULT

Use this clause to delete a previously specified default value and replace it with NULL.

SET NOT NULL

Use this clause to add a NOT NULL constraint to the domain; columns or parameters of this domain will be prevented from being written as NULL, i.e., a value is required.

Adding a NOT NULL constraint to an existing domain will subject all columns using this domain to a full data validation, so ensure that the columns have no nulls before attempting the change.

DROP NOT NULL

Drop the NOT NULL constraint from the domain.

An explicit NOT NULL constraint on a column that depends on a domain prevails over the domain. In this situation, the modification of the domain to make it nullable does not propagate to the column.
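A short sketch of this behaviour, using hypothetical names:

CREATE DOMAIN D_CODE AS CHAR(3) NOT NULL;
CREATE TABLE ITEMS (CODE D_CODE NOT NULL); -- explicit NOT NULL on the column

ALTER DOMAIN D_CODE DROP NOT NULL;
-- ITEMS.CODE remains non-nullable: the explicit column constraint prevails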

ADD CONSTRAINT CHECK

Use the ADD CONSTRAINT CHECK clause to add a CHECK constraint to the domain. If the domain already has a CHECK constraint, it will have to be deleted first, using an ALTER DOMAIN statement that includes a DROP CONSTRAINT clause.

TYPE

The TYPE clause is used to change the data type of the domain to a different, compatible one. The system will forbid any change to the type that could result in data loss. An example would be if the number of characters in the new type were smaller than in the existing type.

When you alter the attributes of a domain, existing PSQL code may become invalid. For information on how to detect it, read the piece entitled The RDB$VALID_BLR Field in Appendix A.

What ALTER DOMAIN Cannot Alter
  • If the domain was declared as an array, it is not possible to change its type or its dimensions; nor can any other type be changed to an array type.

  • There is no way to change the default collation without dropping the domain and recreating it with the desired attributes.

Who Can Alter a Domain

The ALTER DOMAIN statement can be executed by:

  • Administrators

  • The owner of the domain

  • Users with the ALTER ANY DOMAIN privilege

Domain alterations can be prevented by dependencies from objects to which the user does not have sufficient privileges.

ALTER DOMAIN Examples
  1. Changing the data type to INTEGER and setting or changing the default value to 2,000:

    ALTER DOMAIN CUSTNO
      TYPE INTEGER
      SET DEFAULT 2000;
  2. Renaming a domain.

    ALTER DOMAIN D_BOOLEAN TO D_BOOL;
  3. Deleting the default value and adding a constraint for the domain:

    ALTER DOMAIN D_DATE
      DROP DEFAULT
      ADD CONSTRAINT CHECK (VALUE >= date '01.01.2000');
  4. Changing the CHECK constraint:

    ALTER DOMAIN D_DATE
      DROP CONSTRAINT;
    
    ALTER DOMAIN D_DATE
      ADD CONSTRAINT CHECK
        (VALUE BETWEEN date '01.01.1900' AND date '31.12.2100');
  5. Changing the data type to increase the permitted number of characters:

    ALTER DOMAIN FIRSTNAME
      TYPE VARCHAR(50) CHARACTER SET UTF8;
  6. Adding a NOT NULL constraint:

    ALTER DOMAIN FIRSTNAME
      SET NOT NULL;
  7. Removing a NOT NULL constraint:

    ALTER DOMAIN FIRSTNAME
      DROP NOT NULL;

5.3.3. DROP DOMAIN

Used for

Deleting an existing domain

Available in

DSQL, ESQL

Syntax
DROP DOMAIN domain_name

The DROP DOMAIN statement deletes a domain that exists in the database. It is not possible to delete a domain if it is referenced by any database table columns or used in any PSQL module. In order to delete a domain that is in use, all columns in all tables that refer to the domain will have to be dropped and all references to the domain will have to be removed from PSQL modules.

Who Can Drop a Domain

The DROP DOMAIN statement can be executed by:

  • Administrators

  • The owner of the domain

  • Users with the DROP ANY DOMAIN privilege

Example of DROP DOMAIN
Deleting the COUNTRYNAME domain
DROP DOMAIN COUNTRYNAME;

5.4. TABLE

As a relational DBMS, Firebird stores data in tables. A table is a flat, two-dimensional structure containing any number of rows. Table rows are often called records.

All rows in a table have the same structure and consist of columns. Table columns are often called fields. A table must have at least one column. Each column contains a single type of SQL data.

This section describes how to create, alter and delete tables in a database.

5.4.1. CREATE TABLE

Used for

creating a new table (relation)

Available in

DSQL, ESQL

Syntax
CREATE [GLOBAL TEMPORARY] TABLE tablename
  [EXTERNAL [FILE] 'filespec']
  (<col_def> [, {<col_def> | <tconstraint>} ...])
  [ON COMMIT {DELETE | PRESERVE} ROWS]

<col_def> ::=
    <regular_col_def>
  | <computed_col_def>
  | <identity_col_def>

<regular_col_def> ::=
  colname {<datatype> | domainname}
  [DEFAULT {<literal> | NULL | <context_var>}]
  [<col_constraint> ...]
  [COLLATE collation_name]

<computed_col_def> ::=
  colname [{<datatype> | domainname}]
  {COMPUTED [BY] | GENERATED ALWAYS AS} (<expression>)

<identity_col_def> ::=
  colname {<datatype> | domainname}
  GENERATED BY DEFAULT AS IDENTITY [(START WITH startvalue)]
  [<col_constraint> ...]

<datatype> ::=
    <scalar_datatype> | <blob_datatype> | <array_datatype>

<scalar_datatype> ::=
  !! See Scalar Data Types Syntax !!

<blob_datatype> ::=
  !! See BLOB Data Types Syntax !!

<array_datatype> ::=
  !! See Array Data Types Syntax !!

<col_constraint> ::=
  [CONSTRAINT constr_name]
    { PRIMARY KEY [<using_index>]
    | UNIQUE      [<using_index>]
    | REFERENCES other_table [(colname)] [<using_index>]
        [ON DELETE {NO ACTION | CASCADE | SET DEFAULT | SET NULL}]
        [ON UPDATE {NO ACTION | CASCADE | SET DEFAULT | SET NULL}]
    | CHECK (<check_condition>)
    | NOT NULL }

<tconstraint> ::=
  [CONSTRAINT constr_name]
    { PRIMARY KEY (<col_list>) [<using_index>]
    | UNIQUE      (<col_list>) [<using_index>]
    | FOREIGN KEY (<col_list>)
        REFERENCES other_table [(<col_list>)] [<using_index>]
        [ON DELETE {NO ACTION | CASCADE | SET DEFAULT | SET NULL}]
        [ON UPDATE {NO ACTION | CASCADE | SET DEFAULT | SET NULL}]
    | CHECK (<check_condition>) }

<col_list> ::= colname [, colname ...]

<using_index> ::= USING
  [ASC[ENDING] | DESC[ENDING]] INDEX indexname

<check_condition> ::=
    <val> <operator> <val>
  | <val> [NOT] BETWEEN <val> AND <val>
  | <val> [NOT] IN (<val> [, <val> ...] | <select_list>)
  | <val> IS [NOT] NULL
  | <val> IS [NOT] DISTINCT FROM <val>
  | <val> [NOT] CONTAINING <val>
  | <val> [NOT] STARTING [WITH] <val>
  | <val> [NOT] LIKE <val> [ESCAPE <val>]
  | <val> [NOT] SIMILAR TO <val> [ESCAPE <val>]
  | <val> <operator> {ALL | SOME | ANY} (<select_list>)
  | [NOT] EXISTS (<select_expr>)
  | [NOT] SINGULAR (<select_expr>)
  | (<check_condition>)
  | NOT <check_condition>
  | <check_condition> OR <check_condition>
  | <check_condition> AND <check_condition>

<operator> ::=
    <> | != | ^= | ~= | = | < | > | <= | >=
  | !< | ^< | ~< | !> | ^> | ~>

<val> ::=
    colname ['['array_idx [, array_idx ...]']']
  | <literal>
  | <context_var>
  | <expression>
  | NULL
  | NEXT VALUE FOR genname
  | GEN_ID(genname, <val>)
  | CAST(<val> AS <cast_type>)
  | (<select_one>)
  | func([<val> [, <val> ...]])

<cast_type> ::= <domain_or_non_array_type> | <array_datatype>

<domain_or_non_array_type> ::=
  !! See Scalar Data Types Syntax !!
Table 26. CREATE TABLE Statement Parameters
Parameter Description

tablename

Name (identifier) for the table. It may consist of up to 31 characters and must be unique in the database.

filespec

File specification (only for external tables). Full file name and path, enclosed in single quotes, correct for the local file system and located on a storage device that is physically connected to Firebird’s host computer.

colname

Name (identifier) for a column in the table. May consist of up to 31 characters and must be unique in the table.

datatype

SQL data type

domain_name

Domain name

start_value

The initial value of the identity column

col_constraint

Column constraint

tconstraint

Table constraint

constr_name

The name (identifier) of a constraint. May consist of up to 31 characters.

other_table

The name of the table referenced by the foreign key constraint

other_col

The name of the column in other_table that is referenced by the foreign key

literal

A literal value that is allowed in the given context

context_var

Any context variable whose data type is allowed in the given context

check_condition

The condition applied to a CHECK constraint, that will resolve as either true, false or NULL

collation

Collation

select_one

A scalar SELECT statement — selecting one column and returning only one row

select_list

A SELECT statement selecting one column and returning zero or more rows

select_expr

A SELECT statement selecting one or more columns and returning zero or more rows

expression

An expression resolving to a value that is allowed in the given context

genname

Sequence (generator) name

func

Internal function or UDF

The CREATE TABLE statement creates a new table. Any user can create one; the table name must be unique among the names of all tables, views and stored procedures in the database.

A table must contain at least one column that is not computed, and the names of columns must be unique in the table.

A column must have either an explicit SQL data type, the name of a domain whose attributes will be copied for the column, or a COMPUTED BY expression definition (a calculated field).

A table may have any number of table constraints, including none.

Character Columns

You can use the CHARACTER SET clause to specify the character set for the CHAR, VARCHAR and BLOB (text subtype) types. If the character set is not specified, the default character set of the database - at time of the creation of the column - will be used. If the database has no default character set, the NONE character set is applied. In this case, data is stored and retrieved the way it was submitted. Data in any encoding can be added to such a column, but it is not possible to add this data to a column with a different encoding. No transliteration is performed between the source and destination encodings, which may result in errors.

The optional COLLATE clause allows you to specify the collation sequence for character data types, including BLOB SUB_TYPE TEXT. If no collation sequence is specified, the default collation sequence for the specified character set - at time of the creation of the column - is applied.

Setting a DEFAULT Value

The optional DEFAULT clause allows you to specify the default value for the table column. This value will be added to the column when an INSERT statement is executed if no value was specified for it and that column was omitted from the INSERT command.

The default value can be a literal of a compatible type, a context variable that is type-compatible with the data type of the column, or NULL, if the column allows it. If no default value is explicitly specified, NULL is implied.

An expression cannot be used as a default value.
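A brief sketch of permitted defaults (table and column names hypothetical):

CREATE TABLE ORDERS (
  STATUS  VARCHAR(10) DEFAULT 'NEW',           -- literal
  CREATED TIMESTAMP DEFAULT CURRENT_TIMESTAMP, -- context variable
  NOTE    VARCHAR(50) DEFAULT NULL             -- explicit NULL
);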

Domain-based Columns

To define a column, you can use a previously defined domain. If the definition of a column is based on a domain, it may contain a new default value, additional CHECK constraints, and a COLLATE clause that will override the values specified in the domain definition. The definition of such a column may contain additional column constraints (for instance, NOT NULL), if the domain does not have it.

It is not possible to define a domain-based column as nullable if the domain was defined with the NOT NULL attribute. If you want a domain that can be used for defining both nullable and non-nullable columns and variables, it is better practice to define the domain as nullable and apply NOT NULL in the downstream column definitions and variable declarations, as in the sketch below.
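
A minimal sketch, with hypothetical domain and table names, of a domain-based column that overrides the domain default and adds column constraints:

CREATE DOMAIN D_AMOUNT AS NUMERIC(18, 2) DEFAULT 0;

CREATE TABLE INVOICES (
  INV_ID INTEGER NOT NULL PRIMARY KEY,
  -- overrides the domain default and adds NOT NULL and CHECK constraints
  AMOUNT D_AMOUNT DEFAULT 1 NOT NULL CHECK (AMOUNT >= 0)
);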

Identity Columns (autoincrement)

Identity columns can be defined using the GENERATED BY DEFAULT AS IDENTITY clause. An identity column is a column associated with an internal sequence generator. Its value is set automatically every time it is not specified in the INSERT statement. The optional START WITH clause allows you to specify an initial value other than 1.

Incorrect START WITH behaviour

The SQL standard requires that START WITH specifies the first value to be generated. Unfortunately, the current implementation in Firebird instead uses the specified value as the initial value of the internal generator backing the identity column. That means that right now it specifies the value before the first value that is generated.

This will be fixed in Firebird 4, see also CORE-6376.

Rules
  • The data type of an identity column must be an exact number type with zero scale. Allowed types are thus SMALLINT, INTEGER, BIGINT, NUMERIC(p[,0]) and DECIMAL(p[,0]).

  • An identity column cannot have a DEFAULT or COMPUTED value.

  • An identity column cannot be altered to become a regular column. The reverse is also true. Firebird 4 will introduce the option to alter an identity column to a regular column.

  • Identity columns are implicitly NOT NULL (non-nullable).

  • Uniqueness is not enforced automatically. A UNIQUE or PRIMARY KEY constraint is required to guarantee uniqueness.

  • The use of other methods of generating key values for identity columns, e.g. by trigger-generator code or by allowing users to change or add them, is discouraged to avoid unexpected key violations.

Calculated Fields

Calculated fields can be defined with the COMPUTED [BY] or GENERATED ALWAYS AS clause (the latter according to the SQL:2003 standard). The two forms are equivalent. Specifying the data type is not required (but possible) for calculated fields, as the DBMS determines and stores the appropriate type from analysis of the expression. Appropriate operations for the data types included in an expression must be specified precisely.

If the data type is explicitly specified for a calculated field, the calculation result is converted to the specified type. This means, for instance, that the result of a numeric expression could be rendered as a string.

In a query that selects a COMPUTED BY column, the expression is evaluated for each row of the selected data.

Instead of a computed column, in some cases it makes sense to use a regular column whose value is maintained by triggers for inserting and updating data, as in the sketch below. This may reduce the performance of inserting/updating records, but it will increase the performance of data selection.
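
A minimal sketch of that trigger-maintained alternative, using hypothetical table and column names:

CREATE TABLE ITEMS (
  PRICE NUMERIC(18, 2),
  QTY   INTEGER,
  TOTAL NUMERIC(18, 2) -- regular column, maintained by the trigger below
);

SET TERM ^;
CREATE TRIGGER BIU_ITEMS FOR ITEMS
ACTIVE BEFORE INSERT OR UPDATE
AS
BEGIN
  -- keep TOTAL in step with PRICE and QTY on every insert or update
  NEW.TOTAL = NEW.PRICE * NEW.QTY;
END ^
SET TERM ;^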

Defining an Array Column
  • If the column is to be an array, the base type can be any SQL data type except BLOB and array.

  • The dimensions of the array are specified between square brackets. (In the Syntax block these brackets appear in quotes to distinguish them from the square brackets that identify optional syntax elements.)

  • For each array dimension, one or two integer numbers define the lower and upper boundaries of its index range:

    • By default, arrays are 1-based. The lower boundary is implicit and only the upper boundary need be specified. A single number smaller than 1 defines the range num..1 and a number greater than 1 defines the range 1..num.

    • Two numbers separated by a colon (‘:’) and optional whitespace, the second greater than the first, can be used to define the range explicitly. One or both boundaries can be less than zero, as long as the upper boundary is greater than the lower (see the sketch after this list).

  • When the array has multiple dimensions, the range definitions for each dimension must be separated by commas and optional whitespace.

  • Subscripts are validated only if an array actually exists. It means that no error messages regarding invalid subscripts will be returned if selecting a specific element returns nothing or if an array field is NULL.
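
The sketch below (hypothetical table and column names) shows both the implicit 1-based form and explicit ranges:

CREATE TABLE SAMPLE_ARRAYS (
  ID     INTEGER NOT NULL PRIMARY KEY,
  VECTOR INTEGER [5],        -- implicit lower boundary: range 1..5
  MATRIX INTEGER [0:2, 1:4]  -- two dimensions with explicit ranges
);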

Constraints

Five types of constraints can be specified. They are:

  • Primary key (PRIMARY KEY)

  • Unique key (UNIQUE)

  • Foreign key (REFERENCES)

  • CHECK constraint (CHECK)

  • NOT NULL constraint (NOT NULL)

Constraints can be specified at column level (“column constraints”) or at table level (“table constraints”). Table-level constraints are required when keys (unique constraint, Primary Key, Foreign Key) consist of multiple columns and when a CHECK constraint involves other columns in the row besides the column being defined. The NOT NULL constraint can only be specified as a column constraint. Syntax for some types of constraint may differ slightly according to whether the constraint is defined at the column or table level.

  • A column-level constraint is specified during a column definition, after all column attributes except COLLATION are specified, and can involve only the column specified in that definition

  • A table-level constraint can only be specified after the definitions of the columns used in the constraint.

  • Table-level constraints are a more flexible way to set constraints, since they can cater for constraints involving multiple columns

  • You can mix column-level and table-level constraints in the same CREATE TABLE statement

The system automatically creates the corresponding index for a primary key (PRIMARY KEY), a unique key (UNIQUE) and a foreign key (REFERENCES for a column-level constraint, FOREIGN KEY REFERENCES for one at the table level).

Names for Constraints and Their Indexes

Column-level constraints and their indexes are named automatically:

  • The constraint name has the form INTEG_n, where n represents one or more digits

  • The index name has the form RDB$PRIMARYn (for a primary key index), RDB$FOREIGNn (for a foreign key index) or RDB$n (for a unique key index). Again, n represents one or more digits.

Automatic naming of table-level constraints and their indexes follows the same pattern, unless the names are supplied explicitly.

Named Constraints

A constraint can be named explicitly if the CONSTRAINT clause is used for its definition. While the CONSTRAINT clause is optional for defining column-level constraints, it is mandatory for table-level constraints. By default, the constraint index will have the same name as the constraint. If a different name is wanted for the constraint index, a USING clause can be included.

The USING Clause

The USING clause allows you to specify a user-defined name for the index that is created automatically and, optionally, to define the direction of the index — either ascending (the default) or descending.

PRIMARY KEY

The PRIMARY KEY constraint is built on one or more key columns, where each column has the NOT NULL constraint specified. The values across the key columns in any row must be unique. A table can have only one primary key.

  • A single-column Primary Key can be defined as a column level or a table-level constraint

  • A multi-column Primary Key must be specified as a table-level constraint

The UNIQUE Constraint

The UNIQUE constraint defines the requirement of content uniqueness for the values in a key throughout the table. A table can contain any number of unique key constraints.

As with the Primary Key, the Unique constraint can be multi-column. If so, it must be specified as a table-level constraint.

NULL in Unique Keys

Firebird’s SQL-99-compliant rules for UNIQUE constraints allow one or more NULLs in a column with a UNIQUE constraint. That makes it possible to define a UNIQUE constraint on a column that does not have the NOT NULL constraint.

For UNIQUE keys that span multiple columns, the logic is a little complicated:

  • Multiple rows having null in all the columns of the key are allowed

  • Multiple rows having keys with different combinations of nulls and non-null values are allowed

  • Multiple rows having the same key columns null and the rest filled with non-null values are allowed, provided the values differ in at least one column

  • Multiple rows having the same key columns null and the rest filled with non-null values that are the same in every column will violate the constraint

The rules for uniqueness can be summarised thus:

In principle, all nulls are considered distinct. However, if two rows have exactly the same key columns filled with non-null values, the NULL columns are ignored and the uniqueness is determined on the non-null columns as though they constituted the entire key.
Illustration
RECREATE TABLE t( x int, y int, z int, unique(x,y,z));
INSERT INTO t values( NULL, 1, 1 );
INSERT INTO t values( NULL, NULL, 1 );
INSERT INTO t values( NULL, NULL, NULL );
INSERT INTO t values( NULL, NULL, NULL ); -- Permitted
INSERT INTO t values( NULL, NULL, 1 );    -- Not permitted
FOREIGN KEY

A Foreign Key ensures that the participating column(s) can contain only values that also exist in the referenced column(s) in the master table. These referenced columns are often called target columns. They must be the primary key or a unique key in the target table. They need not have a NOT NULL constraint defined on them although, if they are the primary key, they will, of course, have that constraint.

The foreign key columns in the referencing table itself do not require a NOT NULL constraint.

A single-column Foreign Key can be defined in the column declaration, using the keyword REFERENCES:

... ,
  ARTIFACT_ID INTEGER REFERENCES COLLECTION (ARTIFACT_ID),

The column ARTIFACT_ID in the example references a column of the same name in the table COLLECTION.

Both single-column and multi-column foreign keys can be defined at the table level. For a multi-column Foreign Key, the table-level declaration is the only option. This method also enables the provision of an optional name for the constraint:

...
  CONSTRAINT FK_ARTSOURCE FOREIGN KEY(DEALER_ID, COUNTRY)
    REFERENCES DEALER (DEALER_ID, COUNTRY),

Notice that the column names in the referenced (“master”) table may differ from those in the Foreign Key.

If no target columns are specified, the Foreign Key automatically references the target table’s Primary Key.
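
For instance, assuming a hypothetical CUSTOMERS table with a primary key, a column-level declaration could omit the target column:

... ,
  CUSTOMER_ID INTEGER REFERENCES CUSTOMERS,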

Foreign Key Actions

With the sub-clauses ON UPDATE and ON DELETE it is possible to specify an action to be taken on the affected foreign key column(s) when referenced values in the master table are changed:

NO ACTION

(the default) - Nothing is done

CASCADE

The change in the master table is propagated to the corresponding row(s) in the child table. If a key value changes, the corresponding key in the child records changes to the new value; if the master row is deleted, the child records are deleted.

SET DEFAULT

The Foreign Key columns in the affected rows will be set to their default values as they were when the foreign key constraint was defined.

SET NULL

The Foreign Key columns in the affected rows will be set to NULL.

The specified action, or the default NO ACTION, could cause a Foreign Key column to become invalid. For example, it could get a value that is not present in the master table, or it could become NULL while the column has a NOT NULL constraint. Such conditions will cause the operation on the master table to fail with an error message.

Example
...
  CONSTRAINT FK_ORDERS_CUST
    FOREIGN KEY (CUSTOMER) REFERENCES CUSTOMERS (ID)
      ON UPDATE CASCADE ON DELETE SET NULL
CHECK Constraint

The CHECK constraint defines a condition that the values inserted into this column must satisfy. A condition is a logical expression (also called a predicate) that can return the values TRUE, FALSE or UNKNOWN. A condition is considered satisfied if the predicate returns TRUE or UNKNOWN (equivalent to NULL). If the predicate returns FALSE, the value will not be accepted. The condition is checked when inserting a new row into the table (the INSERT statement), when updating an existing value of the table column (the UPDATE statement), and also for statements where one of these actions may take place (UPDATE OR INSERT, MERGE).

A CHECK constraint on a domain-based column does not replace an existing CHECK condition on the domain, but becomes an addition to it. The Firebird engine has no way, during definition, to verify that the extra CHECK does not conflict with the existing one.

CHECK constraints — whether defined at table level or column level — refer to table columns by their names. The use of the keyword VALUE as a placeholder — as in domain CHECK constraints — is not valid in the context of defining column constraints.

Example

with two column-level constraints and one at table-level:

CREATE TABLE PLACES (
  ...
  LAT DECIMAL(9, 6) CHECK (ABS(LAT) <=  90),
  LON DECIMAL(9, 6) CHECK (ABS(LON) <= 180),
  ...
  CONSTRAINT CHK_POLES CHECK (ABS(LAT) < 90 OR LON = 0)
);
NOT NULL constraint

In Firebird, columns are nullable by default. The NOT NULL constraint specifies that the column cannot take NULL in place of a value.

A NOT NULL constraint can only be defined as a column constraint, not as a table constraint.

Who Can Create a Table

The CREATE TABLE statement can be executed by:

  • Administrators

  • Users with the CREATE TABLE privilege

The user executing the CREATE TABLE statement becomes the owner of the table.

CREATE TABLE Examples
  1. Creating the COUNTRY table with the primary key specified as a column constraint.

    CREATE TABLE COUNTRY (
      COUNTRY COUNTRYNAME NOT NULL PRIMARY KEY,
      CURRENCY VARCHAR(10) NOT NULL
    );
  2. Creating the STOCK table with the named primary key specified at the column level and the named unique key specified at the table level.

    CREATE TABLE STOCK (
      MODEL     SMALLINT NOT NULL CONSTRAINT PK_STOCK PRIMARY KEY,
      MODELNAME CHAR(10) NOT NULL,
      ITEMID    INTEGER NOT NULL,
      CONSTRAINT MOD_UNIQUE UNIQUE (MODELNAME, ITEMID)
    );
  3. Creating the JOB table with a primary key constraint spanning two columns, a foreign key constraint for the COUNTRY table and a table-level CHECK constraint. The table also contains an array of 5 elements.

    CREATE TABLE JOB (
      JOB_CODE        JOBCODE NOT NULL,
      JOB_GRADE       JOBGRADE NOT NULL,
      JOB_COUNTRY     COUNTRYNAME,
      JOB_TITLE       VARCHAR(25) NOT NULL,
      MIN_SALARY      NUMERIC(18, 2) DEFAULT 0 NOT NULL,
      MAX_SALARY      NUMERIC(18, 2) NOT NULL,
      JOB_REQUIREMENT BLOB SUB_TYPE 1,
      LANGUAGE_REQ    VARCHAR(15) [1:5],
      PRIMARY KEY (JOB_CODE, JOB_GRADE),
      FOREIGN KEY (JOB_COUNTRY) REFERENCES COUNTRY (COUNTRY)
      ON UPDATE CASCADE
      ON DELETE SET NULL,
      CONSTRAINT CHK_SALARY CHECK (MIN_SALARY < MAX_SALARY)
    );
  4. Creating the PROJECT table with primary, foreign and unique key constraints with custom index names specified with the USING clause.

    CREATE TABLE PROJECT (
      PROJ_ID     PROJNO NOT NULL,
      PROJ_NAME   VARCHAR(20) NOT NULL UNIQUE USING DESC INDEX IDX_PROJNAME,
      PROJ_DESC   BLOB SUB_TYPE 1,
      TEAM_LEADER EMPNO,
      PRODUCT     PRODTYPE,
      CONSTRAINT PK_PROJECT PRIMARY KEY (PROJ_ID) USING INDEX IDX_PROJ_ID,
      FOREIGN KEY (TEAM_LEADER) REFERENCES EMPLOYEE (EMP_NO)
        USING INDEX IDX_LEADER
    );
  5. Creating a table with an identity column

    create table objects (
      id integer generated by default as identity primary key,
      name varchar(15)
    );
    
    insert into objects (name) values ('Table');
    insert into objects (id, name) values (10, 'Computer');
    insert into objects (name) values ('Book');
    
    select * from objects order by id;
    
              ID NAME
    ============ ===============
               1 Table
               2 Book
              10 Computer
  6. Creating the SALARY_HISTORY table with two computed fields. The first one is declared according to the SQL:2003 standard, while the second one is declared according to the traditional declaration of computed fields in Firebird.

    CREATE TABLE SALARY_HISTORY (
      EMP_NO         EMPNO NOT NULL,
      CHANGE_DATE    TIMESTAMP DEFAULT 'NOW' NOT NULL,
      UPDATER_ID     VARCHAR(20) NOT NULL,
      OLD_SALARY     SALARY NOT NULL,
      PERCENT_CHANGE DOUBLE PRECISION DEFAULT 0 NOT NULL,
      SALARY_CHANGE  GENERATED ALWAYS AS
        (OLD_SALARY * PERCENT_CHANGE / 100),
      NEW_SALARY     COMPUTED BY
        (OLD_SALARY + OLD_SALARY * PERCENT_CHANGE / 100)
    );
Global Temporary Tables (GTT)

Global temporary tables have persistent metadata, but their contents are transaction-bound (the default) or connection-bound. Every transaction or connection has its own private instance of a GTT, isolated from all the others. Instances are only created if and when the GTT is referenced. They are destroyed when the transaction ends or on disconnection. The metadata of a GTT can be modified or removed using ALTER TABLE and DROP TABLE, respectively.

Syntax
CREATE GLOBAL TEMPORARY TABLE tablename
  (<column_def> [, {<column_def> | <table_constraint>} ...])
  [ON COMMIT {DELETE | PRESERVE} ROWS]
Syntax notes
  • ON COMMIT DELETE ROWS creates a transaction-level GTT (the default), ON COMMIT PRESERVE ROWS a connection-level GTT

  • An EXTERNAL [FILE] clause is not allowed in the definition of a global temporary table

Since Firebird 3.0, GTTs are writable in read-only transactions. The effect is as follows:

Read-only transaction in read-write database

Writable in both ON COMMIT PRESERVE ROWS and ON COMMIT DELETE ROWS

Read-only transaction in read-only database

Writable in ON COMMIT DELETE ROWS only

Restrictions on GTTs

GTTs can be “dressed up” with all the features and paraphernalia of ordinary tables (keys, references, indexes, triggers and so on) but there are a few restrictions:

  • GTTs and regular tables cannot reference one another

  • A connection-bound (“PRESERVE ROWS”) GTT cannot reference a transaction-bound (“DELETE ROWS”) GTT

  • Domain constraints cannot reference any GTT

  • The destruction of a GTT instance at the end of its life cycle does not cause any BEFORE/AFTER delete triggers to fire

In an existing database, it is not always easy to distinguish a regular table from a GTT, or a transaction-level GTT from a connection-level GTT. Use this query to find out what type of table you are looking at:

select t.rdb$type_name
from rdb$relations r
join rdb$types t on r.rdb$relation_type = t.rdb$type
where t.rdb$field_name = 'RDB$RELATION_TYPE'
and r.rdb$relation_name = 'TABLENAME'

For an overview of the types of all the relations in the database:

select r.rdb$relation_name, t.rdb$type_name
from rdb$relations r
join rdb$types t on r.rdb$relation_type = t.rdb$type
where t.rdb$field_name = 'RDB$RELATION_TYPE'
and coalesce (r.rdb$system_flag, 0) = 0

The RDB$TYPE_NAME field will show PERSISTENT for a regular table, VIEW for a view, GLOBAL_TEMPORARY_PRESERVE for a connection-bound GTT and GLOBAL_TEMPORARY_DELETE for a transaction-bound GTT.

Examples of Global Temporary Tables
  1. Creating a connection-scoped global temporary table.

    CREATE GLOBAL TEMPORARY TABLE MYCONNGTT (
      ID  INTEGER NOT NULL PRIMARY KEY,
      TXT VARCHAR(32),
      TS  TIMESTAMP DEFAULT CURRENT_TIMESTAMP)
    ON COMMIT PRESERVE ROWS;
  2. Creating a transaction-scoped global temporary table that uses a foreign key to reference a connection-scoped global temporary table. The ON COMMIT sub-clause is optional because DELETE ROWS is the default.

    CREATE GLOBAL TEMPORARY TABLE MYTXGTT (
      ID        INTEGER NOT NULL PRIMARY KEY,
      PARENT_ID INTEGER NOT NULL REFERENCES MYCONNGTT(ID),
      TXT       VARCHAR(32),
      TS        TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    ) ON COMMIT DELETE ROWS;
External Tables

The optional EXTERNAL [FILE] clause specifies that the table is stored outside the database in an external text file of fixed-length records. The columns of a table stored in an external file can be of any type except BLOB or ARRAY, although for most purposes, only columns of CHAR types would be useful.

All you can do with a table stored in an external file is insert new rows (INSERT) and query the data (SELECT). Updating existing data (UPDATE) and deleting rows (DELETE) are not possible.

A file that is defined as an external table must be located on a storage device that is physically present on the machine where the Firebird server runs and, if the parameter ExternalFileAccess in the firebird.conf configuration file is Restrict, it must be in one of the directories listed there as the argument for Restrict. If the file does not exist yet, Firebird will create it on first access.

The ability to use external files for a table depends on the value set for the ExternalFileAccess parameter in firebird.conf:

  • If it is set to None (the default), any attempt to access an external file will be denied.

  • The Restrict setting is recommended, for restricting external file access to directories created explicitly for the purpose by the server administrator. For example:

    • ExternalFileAccess = Restrict externalfiles will restrict access to a directory named externalfiles directly beneath the Firebird root directory

    • ExternalFileAccess = Restrict d:\databases\outfiles; e:\infiles will restrict access to just those two directories on the Windows host server. Note that any path that is a network mapping will not work. Paths enclosed in single or double quotes will not work, either.

  • If this parameter is set to Full, external files may be accessed anywhere on the host file system. This creates a security vulnerability and is not recommended.

External File Format

The “row” format of the external table is fixed length and binary. There are no field delimiters: both field and row boundaries are determined by maximum sizes, in bytes, of the field definitions. It is important to keep this in mind, both when defining the structure of the external table and when designing an input file for an external table that is to import data from another application. The ubiquitous “.csv” format, for example, is of no use as an input file and cannot be generated directly into an external file.

The most useful data type for the columns of external tables is the fixed-length CHAR type, of suitable lengths for the data they are to carry. Date and number types are easily cast to and from strings whereas, unless the files are to be read by another Firebird database, the native data types — binary data — will appear to external applications as unparseable “alphabetti”.

Of course, there are ways to manipulate typed data so as to generate output files from Firebird that can be read directly as input files to other applications, using stored procedures, with or without employing external tables. Such techniques are beyond the scope of a language reference. Here, we provide some guidelines and tips for producing and working with simple text files, since the external table feature is often used as an easy way to produce or read transaction-independent logs that can be studied off-line in a text editor or auditing application.

Row Delimiters

Generally, external files are more useful if rows are separated by a delimiter, in the form of a “newline” sequence that is recognised by reader applications on the intended platform. For most contexts on Windows, it is the two-byte 'CRLF' sequence, carriage return (ASCII code decimal 13) and line feed (ASCII code decimal 10). On POSIX, LF on its own is usual; for some MacOSX applications, it may be LFCR. There are various ways to populate this delimiter column. In our example below, it is done by using a BEFORE INSERT trigger and the internal function ASCII_CHAR.

External Table Example

For our example, we will define an external log table that might be used by an exception handler in a stored procedure or trigger. The external table is chosen because the messages from any handled exceptions will be retained in the log, even if the transaction that launched the process is eventually rolled back because of another, unhandled exception. For demonstration purposes, it has just two data columns, a time stamp and a message. The third column stores the row delimiter:

CREATE TABLE ext_log
  EXTERNAL FILE 'd:\externals\log_me.txt' (
  stamp CHAR (24),
  message CHAR(100),
  crlf CHAR(2) -- for a Windows context
);
COMMIT;

Now, a trigger, to write the timestamp and the row delimiter each time a message is written to the file:

SET TERM ^;
CREATE TRIGGER bi_ext_log FOR ext_log
ACTIVE BEFORE INSERT
AS
BEGIN
  IF (new.stamp is NULL) then
    new.stamp = CAST (CURRENT_TIMESTAMP as CHAR(24));
  new.crlf = ASCII_CHAR(13) || ASCII_CHAR(10);
END ^
COMMIT ^
SET TERM ;^

Inserting some records (which could have been done by an exception handler or a fan of Shakespeare):

insert into ext_log (message)
values('Shall I compare thee to a summer''s day?');
insert into ext_log (message)
values('Thou art more lovely and more temperate');

The output:

2015-10-07 15:19:03.4110Shall I compare thee to a summer's day?
2015-10-07 15:19:58.7600Thou art more lovely and more temperate

5.4.2. ALTER TABLE

Used for

Altering the structure of a table.

Available in

DSQL, ESQL

Syntax
ALTER TABLE tablename
  <operation> [, <operation> ...]

<operation> ::=
    ADD <col_def>
  | ADD <tconstraint>
  | DROP colname
  | DROP CONSTRAINT constr_name
  | ALTER [COLUMN] colname <col_mod>

<col_def> ::=
    <regular_col_def>
  | <computed_col_def>
  | <identity_col_def>

<regular_col_def> ::=
  colname {<datatype> | domainname}
  [DEFAULT {<literal> | NULL | <context_var>}]
  [<col_constraint> ...]
  [COLLATE collation_name]

<computed_col_def> ::=
  colname [{<datatype> | domainname}]
  {COMPUTED [BY] | GENERATED ALWAYS AS} (<expression>)

<identity_col_def> ::=
  colname {<datatype> | domainname}
  GENERATED BY DEFAULT AS IDENTITY [(START WITH startvalue)]
  [<col_constraint> ...]

<col_mod> ::=
    TO newname
  | POSITION newpos
  | <regular_col_mod>
  | <computed_col_mod>
  | <identity_col_mod>

<regular_col_mod> ::=
    TYPE {<datatype> | domainname}
  | SET DEFAULT {<literal> | NULL | <context_var>}
  | DROP DEFAULT
  | {SET | DROP} NOT NULL

<computed_col_mod> ::=
    [TYPE <datatype>] {COMPUTED [BY] | GENERATED ALWAYS AS} (<expression>)

<identity_col_mod> ::=
    RESTART [WITH startvalue]

!! See CREATE TABLE syntax for further rules !!
Table 27. ALTER TABLE Statement Parameters
Parameter Description

tablename

Name (identifier) of the table

operation

One of the available operations altering the structure of the table

colname

Name (identifier) for a column in the table, max. 31 characters. Must be unique in the table.

domain_name

Domain name

newname

New name (identifier) for the column, max. 31 characters. Must be unique in the table.

newpos

The new column position (an integer between 1 and the number of columns in the table)

start_value

The first value of the identity column after restart

other_table

The name of the table referenced by the foreign key constraint

literal

A literal value that is allowed in the given context

context_var

A context variable whose type is allowed in the given context

check_condition

The condition of a CHECK constraint that will be satisfied if it evaluates to TRUE or UNKNOWN/NULL

collation

Name of a collation sequence that is valid for charset_name, if it is supplied with datatype or, otherwise, is valid for the default character set of the database

The ALTER TABLE statement changes the structure of an existing table. With one ALTER TABLE statement it is possible to perform multiple operations, adding/dropping columns and constraints and also altering column specifications.

Multiple operations in an ALTER TABLE statement are separated with commas.

Version Count Increments

Some changes in the structure of a table increment the metadata change counter (“version count”) assigned to every table. The number of metadata changes is limited to 255 for each table. Once the counter reaches the 255 limit, you will not be able to make any further changes to the structure of the table without resetting the counter.

To reset the metadata change counter
You need to back up and restore the database using the gbak utility.
The ADD Clause

With the ADD clause you can add a new column or a new table constraint. The syntax for defining the column and the syntax for defining the table constraint correspond with those described for the CREATE TABLE statement.

Effect on Version Count
  • Each time a new column is added, the metadata change counter is increased by one

  • Adding a new table constraint does not increase the metadata change counter

Points to Be Aware of
  1. Adding a column with a NOT NULL constraint without a DEFAULT value will — since Firebird 3.0 — fail if the table has existing rows. When adding a non-nullable column, it is recommended either to set a default value for it, or to create it as nullable, update the column in existing rows with a non-null value, and then add a NOT NULL constraint (see the sketch after this list).

  2. When a new CHECK constraint is added, existing data is not tested for compliance. Prior testing of existing data against the new CHECK expression is recommended.

  3. Although adding an identity column is supported, this will only succeed if the table is empty. Adding an identity column will fail if the table has one or more rows.
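
A sketch of the staged approach recommended in point 1, using hypothetical table and column names:

-- step 1: add the column as nullable
ALTER TABLE EMPLOYEES ADD BADGE_NO VARCHAR(10);
-- step 2: give existing rows a non-null value
UPDATE EMPLOYEES SET BADGE_NO = 'UNKNOWN' WHERE BADGE_NO IS NULL;
COMMIT;
-- step 3: add the NOT NULL constraint
ALTER TABLE EMPLOYEES ALTER COLUMN BADGE_NO SET NOT NULL;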

The DROP Clause

The DROP colname clause deletes the specified column from the table. An attempt to drop a column will fail if anything references it. Consider the following items as sources of potential dependencies:

  • column or table constraints

  • indexes

  • stored procedures and triggers

  • views

Effect on Version Count
  • Each time a column is dropped, the table’s metadata change counter is increased by one.

The DROP CONSTRAINT Clause

The DROP CONSTRAINT clause deletes the specified column-level or table-level constraint.

A PRIMARY KEY or UNIQUE key constraint cannot be deleted if it is referenced by a FOREIGN KEY constraint in another table. It will be necessary to drop that FOREIGN KEY constraint before attempting to drop the PRIMARY KEY or UNIQUE key constraint it references.
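
A sketch of the required order, with hypothetical constraint names:

-- drop the referencing foreign key first ...
ALTER TABLE ORDERS DROP CONSTRAINT FK_ORDERS_CUST;
-- ... then the primary key it references
ALTER TABLE CUSTOMERS DROP CONSTRAINT PK_CUSTOMERS;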

Effect on Version Count
  • Deleting a column constraint or a table constraint does not increase the metadata change counter.

The ALTER [COLUMN] Clause

With the ALTER [COLUMN] clause, attributes of existing columns can be modified without the need to drop and re-add the column. Permitted modifications are:

  • change the name (does not affect the metadata change counter)

  • change the data type (increases the metadata change counter by one)

  • change the column position in the column list of the table (does not affect the metadata change counter)

  • delete the default column value (does not affect the metadata change counter)

  • set a default column value or change the existing default (does not affect the metadata change counter)

  • change the type and expression for a computed column (does not affect the metadata change counter)

  • set the NOT NULL constraint (does not affect the metadata change counter)

  • drop the NOT NULL constraint (does not affect the metadata change counter)

Renaming a Column: the TO Clause

The TO keyword with a new identifier renames an existing column. The table must not have an existing column that has the same identifier.

It will not be possible to change the name of a column that is included in any constraint: PRIMARY KEY, UNIQUE key, FOREIGN KEY, column constraint or the CHECK constraint of the table.

Renaming a column will also be disallowed if the column is used in any trigger, stored procedure or view.

Changing the Data Type of a Column: the TYPE Clause

The keyword TYPE changes the data type of an existing column to another, allowable type. A type change that might result in data loss will be disallowed. As an example, the number of characters in the new type for a CHAR or VARCHAR column cannot be smaller than the existing specification for it.

If the column was declared as an array, no change to its type or its number of dimensions is permitted.

The data type of a column that is involved in a foreign key, primary key or unique constraint cannot be changed at all.

Changing the Position of a Column: the POSITION Clause

The POSITION keyword changes the position of an existing column in the notional “left-to-right” layout of the record.

Numbering of column positions starts at 1.

  • If a position less than 1 is specified, an error message will be returned

  • If a position number is greater than the number of columns in the table, its new position will be adjusted silently to match the number of columns.
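
For instance, using the STOCK table from the CREATE TABLE examples, a sketch:

ALTER TABLE STOCK
  ALTER COLUMN ITEMID POSITION 2;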

The DROP DEFAULT and SET DEFAULT Clauses

The optional DROP DEFAULT clause deletes the default value for the column if it was put there previously by a CREATE TABLE or ALTER TABLE statement.

  • If the column is based on a domain with a default value, the default value will revert to the domain default

  • An execution error will be raised if an attempt is made to delete the default value of a column which has no default value or whose default value is domain-based

The optional SET DEFAULT clause sets a default value for the column. If the column already has a default value, it will be replaced with the new one. The default value applied to a column always overrides one inherited from a domain.

The SET NOT NULL and DROP NOT NULL Clauses

The SET NOT NULL clause adds a NOT NULL constraint on an existing table column. Contrary to the definition in CREATE TABLE, it is not possible to specify a constraint name.

The successful addition of the NOT NULL constraint is subject to a full data validation on the table, so ensure that the column has no nulls before attempting the change.

An explicit NOT NULL constraint on a domain-based column overrides the domain settings. In this scenario, changing the domain to be nullable does not extend to the table column.

If the column’s type is a domain that also has a NOT NULL constraint, dropping the NOT NULL constraint from the column has no observable effect until the NOT NULL constraint is dropped from the domain as well.

The COMPUTED [BY] or GENERATED ALWAYS AS Clauses

The data type and expression underlying a computed column can be modified using a COMPUTED [BY] or GENERATED ALWAYS AS clause in the ALTER TABLE ALTER [COLUMN] statement. Converting a regular column to a computed one and vice versa are not permitted.

Changing Identity Columns

For identity columns (GENERATED BY DEFAULT AS IDENTITY), it is possible to restart the sequence used for generating identity values. If only the RESTART clause is specified, then the sequence resets to the initial value specified during CREATE TABLE. If the optional WITH start_value clause is specified, the sequence will restart with the specified value.

It is not possible to convert an existing column to an identity column, or to convert an identity column to a normal column. Firebird 4 will introduce the ability to convert an identity column to a normal column.

Restarting is currently subject to a bug: the first value generated after a restart is 1 (one) higher than the configured initial value (or the value specified through WITH). See also Identity Columns (autoincrement).

Attributes that Cannot Be Altered

The following alterations are not supported:

  • Changing the collation of a character type column

Who Can Alter a Table?

The ALTER TABLE statement can be executed by:

  • Administrators

  • The owner of the table

  • Users with the ALTER ANY TABLE privilege

Examples Using ALTER TABLE
  1. Adding the CAPITAL column to the COUNTRY table.

    ALTER TABLE COUNTRY
      ADD CAPITAL VARCHAR(25);
  2. Adding the CAPITAL column with the NOT NULL and UNIQUE constraint and deleting the CURRENCY column.

    ALTER TABLE COUNTRY
      ADD CAPITAL VARCHAR(25) NOT NULL UNIQUE,
      DROP CURRENCY;
  3. Adding the CHK_SALARY check constraint and a foreign key to the JOB table.

    ALTER TABLE JOB
      ADD CONSTRAINT CHK_SALARY CHECK (MIN_SALARY < MAX_SALARY),
      ADD FOREIGN KEY (JOB_COUNTRY) REFERENCES COUNTRY (COUNTRY);
  4. Setting default value for the MODEL field, changing the type of the ITEMID column and renaming the MODELNAME column.

    ALTER TABLE STOCK
      ALTER COLUMN MODEL SET DEFAULT 1,
      ALTER COLUMN ITEMID TYPE BIGINT,
      ALTER COLUMN MODELNAME TO NAME;
  5. Restarting the sequence of an identity column.

    ALTER TABLE objects
      ALTER ID RESTART WITH 100;
  6. Changing the computed columns NEW_SALARY and SALARY_CHANGE.

    ALTER TABLE SALARY_HISTORY
      ALTER NEW_SALARY GENERATED ALWAYS AS
        (OLD_SALARY + OLD_SALARY * PERCENT_CHANGE / 100),
      ALTER SALARY_CHANGE COMPUTED BY
        (OLD_SALARY * PERCENT_CHANGE / 100);

5.4.3. DROP TABLE

Used for

Dropping (deleting) a table

Available in

DSQL, ESQL

Syntax
DROP TABLE tablename
Table 28. DROP TABLE Statement Parameter
Parameter Description

tablename

Name (identifier) of the table

The DROP TABLE statement drops (deletes) an existing table. If the table has dependencies, the DROP TABLE statement will fail with an execution error.

When a table is dropped, all its triggers and indexes will be deleted as well.

Who Can Drop a Table?

The DROP TABLE statement can be executed by:

  • Administrators

  • The owner of the table

  • Users with the DROP ANY TABLE privilege

Example of DROP TABLE
Dropping the COUNTRY table.
DROP TABLE COUNTRY;

5.4.4. RECREATE TABLE

Used for

Creating a new table (relation) or recreating an existing one

Available in

DSQL

Syntax
RECREATE [GLOBAL TEMPORARY] TABLE tablename
  [EXTERNAL [FILE] 'filespec']
  (<col_def> [, {<col_def> | <tconstraint>} ...])
  [ON COMMIT {DELETE | PRESERVE} ROWS]

See the CREATE TABLE section for the full syntax of CREATE TABLE and descriptions of defining tables, columns and constraints.

RECREATE TABLE creates or recreates a table. If a table with this name already exists, the RECREATE TABLE statement will try to drop it and create a new one. Existing dependencies will prevent the statement from executing.

Example of RECREATE TABLE
Creating or recreating the COUNTRY table.
RECREATE TABLE COUNTRY (
  COUNTRY COUNTRYNAME NOT NULL PRIMARY KEY,
  CURRENCY VARCHAR(10) NOT NULL
);

5.5. INDEX

An index is a database object used for faster data retrieval from a table or for speeding up the sorting in a query. Indexes are also used to enforce the referential integrity constraints PRIMARY KEY, FOREIGN KEY and UNIQUE.

This section describes how to create indexes, activate and deactivate them, delete them and collect statistics (recalculate selectivity) for them.

5.5.1. CREATE INDEX

Used for

Creating an index for a table

Available in

DSQL, ESQL

Syntax
CREATE [UNIQUE] [ASC[ENDING] | DESC[ENDING]]
  INDEX indexname ON tablename
  {(col [, col ...]) | COMPUTED BY (<expression>)}
Table 29. CREATE INDEX Statement Parameters
Parameter Description

indexname

Index name. It may consist of up to 31 characters

tablename

The name of the table for which the index is to be built

col

Name of a column in the table. Columns of the types BLOB and ARRAY and computed fields cannot be used in an index

expression

The expression that will compute the values for a computed index, also known as an “expression index”

The CREATE INDEX statement creates an index for a table that can be used to speed up searching, sorting and grouping. Indexes are created automatically in the process of defining constraints, such as primary key, foreign key or unique constraints.

An index can be built on the content of columns of any data type except for BLOB and arrays. The name (identifier) of an index must be unique among all index names.

Key Indexes

When a primary key, foreign key or unique constraint is added to a table or column, an index with the same name is created automatically, without an explicit directive from the designer. For example, the PK_COUNTRY index will be created automatically when you execute and commit the following statement:

ALTER TABLE COUNTRY ADD CONSTRAINT PK_COUNTRY
  PRIMARY KEY (ID);
Who Can Create an Index?

The CREATE INDEX statement can be executed by:

  • Administrators

  • The owner of the table

  • Users with the ALTER ANY TABLE privilege

Unique Indexes

Specifying the keyword UNIQUE in the index creation statement creates an index in which uniqueness will be enforced throughout the table. The index is referred to as a “unique index”. A unique index is not a constraint.

Unique indexes cannot contain duplicate key values (or duplicate key value combinations, in the case of compound, or multi-column, or multi-segment, indexes). Duplicated NULLs are permitted, in accordance with the SQL:99 standard, in both single-segment and multi-segment indexes.

Index Direction

All indexes in Firebird are uni-directional. An index may be constructed from the lowest value to the highest (ascending order) or from the highest value to the lowest (descending order). The keywords ASC[ENDING] and DESC[ENDING] are used to specify the direction of the index. The default index order is ASC[ENDING]. It is quite valid to define both an ascending and a descending index on the same column or key set.

A descending index can be useful on a column that will be subjected to searches on the high values (“newest”, maximum, etc.)

Firebird uses B-tree indexes, which are bidirectional. However, due to technical limitations, Firebird uses an index in one direction only.

Computed (Expression) Indexes

In creating an index, you can use the COMPUTED BY clause to specify an expression instead of one or more columns. Computed indexes are used in queries where the condition in a WHERE, ORDER BY or GROUP BY clause exactly matches the expression in the index definition. The expression in a computed index may involve several columns in the table.

Expression indexes can also be used as a workaround for indexing computed columns: use the name of the computed column as the expression.
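
A sketch of that workaround, indexing the computed column NEW_SALARY of the SALARY_HISTORY table from the CREATE TABLE examples:

CREATE INDEX IDX_NEW_SALARY ON SALARY_HISTORY
  COMPUTED BY (NEW_SALARY);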

Limits on Indexes

Certain limits apply to indexes.

The maximum length of a key in an index is limited to ¼ of the page size. For example, with a page size of 8192 bytes, the maximum index key length is 2048 bytes.

Maximum Indexes per Table

The number of indexes that can be accommodated for each table is limited. The actual maximum for a specific table depends on the page size and the number of columns in the indexes.

Table 30. Maximum Indexes per Table (number of indexes depending on column count)

Page Size   Single-Column   2-Column   3-Column
4096        203             145        113
8192        408             291        227
16384       818             584        454

Character Index Limits

The maximum indexed string length is 9 bytes less than the maximum key length. The maximum indexable string length depends on the page size and the character set.

Table 31. Maximum indexable (VAR)CHAR length (maximum indexable string length by charset type)

Page Size   1 byte/char   2 byte/char   3 byte/char   4 byte/char
4096        1015          507           338           253
8192        2039          1019          679           509
16384       4087          2043          1362          1021

Examples Using CREATE INDEX
  1. Creating an index for the UPDATER_ID column in the SALARY_HISTORY table

    CREATE INDEX IDX_UPDATER
      ON SALARY_HISTORY (UPDATER_ID);
  2. Creating an index with keys sorted in the descending order for the CHANGE_DATE column in the SALARY_HISTORY table

    CREATE DESCENDING INDEX IDX_CHANGE
      ON SALARY_HISTORY (CHANGE_DATE);
  3. Creating a multi-segment index for the ORDER_STATUS, PAID columns in the SALES table

    CREATE INDEX IDX_SALESTAT
      ON SALES (ORDER_STATUS, PAID);
  4. Creating an index that does not permit duplicate values for the NAME column in the COUNTRY table

    CREATE UNIQUE INDEX UNQ_COUNTRY_NAME
      ON COUNTRY (NAME);
  5. Creating a computed index for the PERSONS table

    CREATE INDEX IDX_NAME_UPPER ON PERSONS
      COMPUTED BY (UPPER (NAME));

    An index like this can be used for a case-insensitive search:

    SELECT *
    FROM PERSONS
    WHERE UPPER(NAME) STARTING WITH UPPER('Iv');

5.5.2. ALTER INDEX

Used for

Activating or deactivating an index; rebuilding an index

Available in

DSQL, ESQL

Syntax
ALTER INDEX indexname {ACTIVE | INACTIVE}
Table 32. ALTER INDEX Statement Parameter
Parameter Description

indexname

Index name

The ALTER INDEX statement activates or deactivates an index. There is no facility on this statement for altering any attributes of the index.

INACTIVE

With the INACTIVE option, the index is switched from the active to inactive state. The effect is similar to the DROP INDEX statement except that the index definition remains in the database. Altering a constraint index to the inactive state is not permitted.

An active index can be deactivated if there are no queries prepared using that index; otherwise, an “object in use” error is returned.

Activating an inactive index is also safe. However, if there are active transactions modifying the table, the transaction containing the ALTER INDEX statement will fail if it has the NOWAIT attribute. If the transaction is in WAIT mode, it will wait for completion of concurrent transactions.

On the other side of the coin, if our ALTER INDEX succeeds and starts to rebuild the index at COMMIT, other transactions modifying that table will fail or wait, according to their WAIT/NO WAIT attributes. The situation is exactly the same for CREATE INDEX.

How is it Useful?

It might be useful to switch an index to the inactive state whilst inserting, updating or deleting a large batch of records in the table that owns the index.
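
A sketch of that pattern, using the IDX_SALESTAT index from the CREATE INDEX examples:

ALTER INDEX IDX_SALESTAT INACTIVE;
-- ... perform the bulk insert/update/delete here ...
ALTER INDEX IDX_SALESTAT ACTIVE; -- switches the index back on and rebuilds it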

ACTIVE

With the ACTIVE option, if the index is in the inactive state, it will be switched to active state and the system rebuilds the index.

How is it Useful?

Even if the index is active when ALTER INDEX ... ACTIVE is executed, the index will be rebuilt. Rebuilding indexes can be a useful piece of housekeeping to do, occasionally, on the indexes of a large table in a database that has frequent inserts, updates or deletes but is infrequently restored.

Who Can Alter an Index?

The ALTER INDEX statement can be executed by:

  • Administrators

  • The owner of the table

  • Users with the ALTER ANY TABLE privilege

Use of ALTER INDEX on a Constraint Index

Altering the index of a PRIMARY KEY, FOREIGN KEY or UNIQUE constraint to INACTIVE is not permitted. However, ALTER INDEX ... ACTIVE works just as well with constraint indexes as it does with others, as an index rebuilding tool.

ALTER INDEX Examples
  1. Deactivating the IDX_UPDATER index

    ALTER INDEX IDX_UPDATER INACTIVE;
  2. Switching the IDX_UPDATER index back to the active state and rebuilding it

    ALTER INDEX IDX_UPDATER ACTIVE;

5.5.3. DROP INDEX

Used for

Dropping (deleting) an index

Available in

DSQL, ESQL

Syntax
DROP INDEX indexname
Table 33. DROP INDEX Statement Parameter
Parameter Description

indexname

Index name

The DROP INDEX statement drops (deletes) the named index from the database.

A constraint index cannot be dropped using DROP INDEX. Constraint indexes are dropped during the process of executing the command ALTER TABLE ... DROP CONSTRAINT ....

Who Can Drop an Index?

The DROP INDEX statement can be executed by:

  • Administrators

  • The owner of the table

  • Users with the ALTER ANY TABLE privilege

DROP INDEX Example
Dropping the IDX_UPDATER index
DROP INDEX IDX_UPDATER;

5.5.4. SET STATISTICS

Used for

Recalculating the selectivity of an index

Available in

DSQL, ESQL

Syntax
SET STATISTICS INDEX indexname
Table 34. SET STATISTICS Statement Parameter
Parameter Description

indexname

Index name

The SET STATISTICS statement recalculates the selectivity of the specified index.

Who Can Update Index Statistics?

The SET STATISTICS statement can be executed by:

  • Administrators

  • The owner of the table

  • Users with the ALTER ANY TABLE privilege

Index Selectivity

The selectivity of an index is the result of evaluating the number of rows that can be selected in a search on every index value. A unique index has the maximum selectivity, because it is impossible to select more than one row for each value of an index key. Keeping the selectivity of an index up to date is important for the optimizer’s choices in seeking the optimal query plan.

Index statistics in Firebird are not automatically recalculated in response to large batches of inserts, updates or deletions. It may be beneficial to recalculate the selectivity of an index after such operations because the selectivity tends to become outdated.

The statements CREATE INDEX and ALTER INDEX ACTIVE both store index statistics that completely correspond to the contents of the newly-[re]built index.

Recalculating selectivity with SET STATISTICS can be performed under concurrent load without risk of corruption. However, be aware that, under concurrent load, the newly calculated statistics could become outdated as soon as SET STATISTICS finishes.

Example Using SET STATISTICS
Recalculating the selectivity of the index IDX_UPDATER
SET STATISTICS INDEX IDX_UPDATER;

5.6. VIEW

A view is a virtual table that is actually a stored and named SELECT query for retrieving data of any complexity. Data can be retrieved from one or more tables, from other views and also from selectable stored procedures.

Unlike regular tables in relational databases, a view is not an independent data set stored in the database. The result is dynamically created as a data set when the view is selected.

The metadata of a view are available to the process that generates the binary code for stored procedures and triggers, just as though they were concrete tables storing persistent data.

5.6.1. CREATE VIEW

Used for

Creating a view

Available in

DSQL

Syntax
CREATE VIEW viewname [<full_column_list>]
  AS <select_statement>
  [WITH CHECK OPTION]

<full_column_list> ::= (colname [, colname ...])
Table 35. CREATE VIEW Statement Parameters
Parameter Description

viewname

View name, maximum 31 characters

select_statement

SELECT statement

full_column_list

The list of columns in the view

colname

View column name. Duplicate column names are not allowed.

The CREATE VIEW statement creates a new view. The identifier (name) of a view must be unique among the names of all views, tables and stored procedures in the database.

The name of the new view can be followed by the list of column names that should be returned to the caller when the view is invoked. Names in the list do not have to be related to the names of the columns in the base tables from which they derive.

If the view column list is omitted, the system will use the column names and/or aliases from the SELECT statement. If duplicate names or non-aliased expression-derived columns make it impossible to obtain a valid list, creation of the view fails with an error.

The number of columns in the view’s list must exactly match the number of columns in the selection list of the underlying SELECT statement in the view definition.

Additional Points
  • If the full list of columns is specified, it makes no sense to specify aliases in the SELECT statement because the names in the column list will override them

  • The column list is optional if all the columns in the SELECT are explicitly named and are unique in the selection list

Updatable Views

A view can be updatable or read-only. If a view is updatable, the data retrieved when this view is called can be changed by the DML statements INSERT, UPDATE, DELETE, UPDATE OR INSERT or MERGE. Changes made in an updatable view are applied to the underlying table(s).

A read-only view can be made updatable with the use of triggers. Once triggers have been defined on a view, changes posted to it will never be written automatically to the underlying table, even if the view was updatable to begin with. It is the responsibility of the programmer to ensure that the triggers update (or delete from, or insert into) the base tables as needed, as in the sketch below.
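
As a minimal sketch, assuming the PRICE table and the PRICE_WITH_MARKUP view shown in the examples later in this section, a trigger might redirect inserts on the view to the base table:

SET TERM ^;
CREATE TRIGGER BI_PRICE_WITH_MARKUP FOR PRICE_WITH_MARKUP
ACTIVE BEFORE INSERT
AS
BEGIN
  -- write the inserted view row to the base table;
  -- COST_WITH_MARKUP is derived, so it is not stored
  INSERT INTO PRICE (CODE_PRICE, COST)
  VALUES (NEW.CODE_PRICE, NEW.COST);
END ^
SET TERM ;^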

A view will be automatically updatable if all the following conditions are met:

  • the SELECT statement queries only one table or one updatable view

  • the SELECT statement does not call any stored procedures

  • each base table (or base view) column not present in the view definition meets one of the following conditions:

    • it is nullable

    • it has a non-NULL default value

    • it has a trigger that supplies a permitted value

  • the SELECT statement contains no fields derived from subqueries or other expressions

  • the SELECT statement does not contain fields defined through aggregate functions (MIN, MAX, AVG, SUM, COUNT, LIST, etc.), statistical functions (CORR, COVAR_POP, COVAR_SAMP, etc.), linear regression functions (REGR_AVGX, REGR_AVGY, etc.) or any type of window function

  • the SELECT statement contains no ORDER BY, GROUP BY or HAVING clause

  • the SELECT statement does not include the keyword DISTINCT or row-restrictive keywords such as ROWS, FIRST, SKIP, OFFSET or FETCH

WITH CHECK OPTION

The optional WITH CHECK OPTION clause requires an updatable view to check whether new or updated data meet the condition specified in the WHERE clause of the SELECT statement. Every attempt to insert a new record or to update an existing one is checked to determine whether the new or updated record would meet the WHERE criteria. If it fails the check, the operation is not performed and an appropriate error message is returned.

WITH CHECK OPTION can be specified only in a CREATE VIEW statement in which a WHERE clause is present to restrict the output of the main SELECT statement. An error message is returned otherwise.

Please note:

If WITH CHECK OPTION is used, the engine checks the input against the WHERE clause before passing anything to the base relation. Therefore, if the check on the input fails, any default clauses or triggers on the base relation that might have been designed to correct the input will never come into action.

Furthermore, view fields omitted from the INSERT statement are passed as NULLs to the base relation, regardless of their presence or absence in the WHERE clause. As a result, base table defaults defined on such fields will not be applied. Triggers, on the other hand, will fire and work as expected.

For views that do not have WITH CHECK OPTION, fields omitted from the INSERT statement are not passed to the base relation at all, so any defaults will be applied.

Who Can Create a View?

The CREATE VIEW statement can be executed by:

  • Administrators

  • Users with the CREATE VIEW privilege

The creator of a view becomes its owner.

To create a view, a non-admin user also needs at least SELECT access to the underlying table(s) and/or view(s), and the EXECUTE privilege on any selectable stored procedures involved.

To enable insertions, updates and deletions through the view, the creator/owner must also possess the corresponding INSERT, UPDATE and DELETE rights on the underlying object(s).

Granting other users privileges on the view is only possible if the view owner has these privileges on the underlying objects WITH GRANT OPTION. It will always be the case if the view owner is also the owner of the underlying objects.

Examples of Creating Views
  1. Creating a view returning the JOB_CODE and JOB_TITLE columns only for those jobs where MAX_SALARY is less than $15,000.

    CREATE VIEW ENTRY_LEVEL_JOBS AS
    SELECT JOB_CODE, JOB_TITLE
    FROM JOB
    WHERE MAX_SALARY < 15000;
  2. Creating a view returning the JOB_CODE and JOB_TITLE columns only for those jobs where MAX_SALARY is less than $15,000. Whenever a new record is inserted or an existing record is updated, the MAX_SALARY < 15000 condition will be checked. If the condition is not true, the insert/update operation will be rejected.

    CREATE VIEW ENTRY_LEVEL_JOBS AS
    SELECT JOB_CODE, JOB_TITLE
    FROM JOB
    WHERE MAX_SALARY < 15000
    WITH CHECK OPTION;
  3. Creating a view with an explicit column list.

    CREATE VIEW PRICE_WITH_MARKUP (
      CODE_PRICE,
      COST,
      COST_WITH_MARKUP
    ) AS
    SELECT
      CODE_PRICE,
      COST,
      COST * 1.1
    FROM PRICE;
  4. Creating a view with the help of aliases for fields in the SELECT statement (the same result as in Example 3).

    CREATE VIEW PRICE_WITH_MARKUP AS
    SELECT
      CODE_PRICE,
      COST,
      COST * 1.1 AS COST_WITH_MARKUP
    FROM PRICE;
  5. Creating a read-only view based on two tables and a stored procedure.

    CREATE VIEW GOODS_PRICE AS
    SELECT
      goods.name AS goodsname,
      price.cost AS cost,
      b.quantity AS quantity
    FROM
      goods
      JOIN price ON goods.code_goods = price.code_goods
      LEFT JOIN sp_get_balance(goods.code_goods) b ON 1 = 1;

5.6.2. ALTER VIEW

Used for

Modifying an existing view

Available in

DSQL

Syntax
ALTER VIEW viewname [<full_column_list>]
    AS <select_statement>
    [WITH CHECK OPTION]

<full_column_list> ::= (colname [, colname ...])
Table 36. ALTER VIEW Statement Parameters
Parameter Description

viewname

Name of an existing view

select_statement

SELECT statement

full_column_list

The list of columns in the view

colname

View column name. Duplicate column names are not allowed.

Use the ALTER VIEW statement for changing the definition of an existing view. Privileges for views remain intact and dependencies are not affected.

The syntax of the ALTER VIEW statement corresponds completely with that of CREATE VIEW.

Be careful when you change the number of columns in a view. Existing application code and PSQL modules that access the view may become invalid. For information on how to detect this kind of problem in stored procedures and triggers, see The RDB$VALID_BLR Field in the Appendix.

Who Can Alter a View?

The ALTER VIEW statement can be executed by:

  • Administrators

  • The owner of the view

  • Users with the ALTER ANY VIEW privilege

Example using ALTER VIEW
Altering the view PRICE_WITH_MARKUP
ALTER VIEW PRICE_WITH_MARKUP (
  CODE_PRICE,
  COST,
  COST_WITH_MARKUP
) AS
SELECT
  CODE_PRICE,
  COST,
  COST * 1.15
FROM PRICE;

5.6.3. CREATE OR ALTER VIEW

Used for

Creating a new view or altering an existing view.

Available in

DSQL

Syntax
CREATE OR ALTER VIEW viewname [<full_column_list>]
  AS <select_statement>
  [WITH CHECK OPTION]

<full_column_list> ::= (colname [, colname ...])
Table 37. CREATE OR ALTER VIEW Statement Parameters
Parameter Description

viewname

Name of a view which may or may not exist

select_statement

SELECT statement

full_column_list

The list of columns in the view

colname

View column name. Duplicate column names are not allowed.

Use the CREATE OR ALTER VIEW statement for changing the definition of an existing view or creating it if it does not exist. Privileges for an existing view remain intact and dependencies are not affected.

The syntax of the CREATE OR ALTER VIEW statement corresponds completely with that of CREATE VIEW.

Example of CREATE OR ALTER VIEW
Creating the new view PRICE_WITH_MARKUP or altering it if it already exists
CREATE OR ALTER VIEW PRICE_WITH_MARKUP (
  CODE_PRICE,
  COST,
  COST_WITH_MARKUP
) AS
SELECT
  CODE_PRICE,
  COST,
  COST * 1.15
FROM PRICE;

5.6.4. DROP VIEW

Used for

Deleting (dropping) a view

Available in

DSQL

Syntax
DROP VIEW viewname
Table 38. DROP VIEW Statement Parameter
Parameter Description

viewname

View name

The DROP VIEW statement drops (deletes) an existing view. The statement will fail if the view has dependencies.

Who Can Drop a View?

The DROP VIEW statement can be executed by:

  • Administrators

  • The owner of the view

  • Users with the DROP ANY VIEW privilege

Example
Deleting the PRICE_WITH_MARKUP view
DROP VIEW PRICE_WITH_MARKUP;

5.6.5. RECREATE VIEW

Used for

Creating a new view or recreating an existing view

Available in

DSQL

Syntax
RECREATE VIEW viewname [<full_column_list>]
  AS <select_statement>
  [WITH CHECK OPTION]

<full_column_list> ::= (colname [, colname ...])
Table 39. RECREATE VIEW Statement Parameters
Parameter Description

viewname

View name, maximum 31 characters

select_statement

SELECT statement

full_column_list

The list of columns in the view

colname

View column name. Duplicate column names are not allowed.

Creates or recreates a view. If there is a view with this name already, the engine will try to drop it before creating the new instance. If the existing view cannot be dropped (for example, because of dependencies or insufficient rights), RECREATE VIEW fails with an error.

Example of RECREATE VIEW
Creating the new view PRICE_WITH_MARKUP or recreating it, if it already exists
RECREATE VIEW PRICE_WITH_MARKUP (
  CODE_PRICE,
  COST,
  COST_WITH_MARKUP
) AS
SELECT
  CODE_PRICE,
  COST,
  COST * 1.15
FROM PRICE;

5.7. TRIGGER

A trigger is a special type of stored procedure that is not called directly, instead being executed when a specified event occurs in the associated table or view. A DML trigger is specific to one and only one relation (table or view) and one phase in the timing of the event (BEFORE or AFTER). It can be specified to execute for one specific event (insert, update, delete) or for some combination of two or three of those events.

Two other forms of trigger exist:

  1. a “database trigger” can be specified to fire at the start or end of a user session (connection) or a user transaction.

  2. a “DDL trigger” can be specified to fire before or after the execution of one or more types of DDL statements.

5.7.1. CREATE TRIGGER

Used for

Creating a new trigger

Available in

DSQL, ESQL

Syntax
CREATE TRIGGER trigname
  { <relation_trigger_legacy>
  | <relation_trigger_sql2003>
  | <database_trigger>
  | <ddl_trigger> }
  <module-body>

<module-body> ::=
  !! See Syntax of Module Body !!

<relation_trigger_legacy> ::=
  FOR {tablename | viewname}
  [ACTIVE | INACTIVE]
  {BEFORE | AFTER} <mutation_list>
  [POSITION number]

<relation_trigger_sql2003> ::=
  [ACTIVE | INACTIVE]
  {BEFORE | AFTER} <mutation_list>
  [POSITION number]
  ON {tablename | viewname}

<database_trigger> ::=
  [ACTIVE | INACTIVE] ON <db_event>
  [POSITION number]

<ddl_trigger> ::=
  [ACTIVE | INACTIVE]
  {BEFORE | AFTER} <ddl_event>
  [POSITION number]

<mutation_list> ::=
  <mutation> [OR <mutation> [OR <mutation>]]

<mutation> ::= INSERT | UPDATE | DELETE

<db_event> ::=
    CONNECT | DISCONNECT
  | TRANSACTION {START | COMMIT | ROLLBACK}

<ddl_event> ::=
    ANY DDL STATEMENT
  | <ddl_event_item> [{OR <ddl_event_item>} ...]

<ddl_event_item> ::=
    {CREATE | ALTER | DROP} TABLE
  | {CREATE | ALTER | DROP} PROCEDURE
  | {CREATE | ALTER | DROP} FUNCTION
  | {CREATE | ALTER | DROP} TRIGGER
  | {CREATE | ALTER | DROP} EXCEPTION
  | {CREATE | ALTER | DROP} VIEW
  | {CREATE | ALTER | DROP} DOMAIN
  | {CREATE | ALTER | DROP} ROLE
  | {CREATE | ALTER | DROP} SEQUENCE
  | {CREATE | ALTER | DROP} USER
  | {CREATE | ALTER | DROP} INDEX
  | {CREATE | DROP} COLLATION
  | ALTER CHARACTER SET
  | {CREATE | ALTER | DROP} PACKAGE
  | {CREATE | DROP} PACKAGE BODY
  | {CREATE | ALTER | DROP} MAPPING
Table 40. CREATE TRIGGER Statement Parameters
Parameter Description

trigname

Trigger name consisting of up to 31 characters. It must be unique among all trigger names in the database.

relation_trigger_legacy

Legacy style of trigger declaration for a relation trigger

relation_trigger_sql2003

Relation trigger declaration compliant with the SQL:2003 standard

database_trigger

Database trigger declaration

tablename

Name of the table with which the relation trigger is associated

viewname

Name of the view with which the relation trigger is associated

mutation_list

List of relation (table | view) events

number

Position of the trigger in the firing order. From 0 to 32,767

db_event

Connection or transaction event

ddl_event

List of metadata change events

ddl_event_item

One of the metadata change events

The CREATE TRIGGER statement is used for creating a new trigger. A trigger can be created either for a relation (table | view) event (or a combination of events), for a database event, or for a DDL event.

CREATE TRIGGER, along with its associates ALTER TRIGGER, CREATE OR ALTER TRIGGER and RECREATE TRIGGER, is a compound statement, consisting of a header and a body. The header specifies the name of the trigger, the name of the relation (for a DML trigger), the phase of the trigger, the event(s) it applies to, and the position to determine an order between triggers.

The trigger body consists of optional declarations of local variables and named cursors followed by one or more statements, or blocks of statements, all enclosed in an outer block that begins with the keyword BEGIN and ends with the keyword END. Declarations and embedded statements are terminated with semi-colons (‘;’).

The name of the trigger must be unique among all trigger names.

Statement Terminators

Some SQL statement editors — specifically the isql utility that comes with Firebird and possibly some third-party editors — employ an internal convention that requires all statements to be terminated with a semi-colon. This creates a conflict with PSQL syntax when coding in these environments. If you are unacquainted with this problem and its solution, please study the details in the PSQL chapter in the section entitled Switching the Terminator in isql.
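
For instance, in isql the terminator can be switched before entering a trigger and restored afterwards, reusing the trigger from example 1 below:

SET TERM ^ ;

CREATE TRIGGER SET_CUST_NO FOR CUSTOMER
ACTIVE BEFORE INSERT POSITION 0
AS
BEGIN
  IF (NEW.CUST_NO IS NULL) THEN
    NEW.CUST_NO = GEN_ID(CUST_NO_GEN, 1);
END^

SET TERM ; ^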

External UDR Triggers

A trigger can also be located in an external module. In this case, instead of a trigger body, the CREATE TRIGGER specifies the location of the trigger in the external module using the EXTERNAL clause. The optional NAME clause specifies the name of the external module, the name of the trigger inside the module, and — optionally — user-defined information. The required ENGINE clause specifies the name of the UDR engine that handles communication between Firebird and the external module. The optional AS clause accepts a string literal “body”, which can be used by the engine or module for various purposes.
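
A sketch of such a declaration, modelled on the udrcpp_example module from the Firebird UDR examples (the table name persons is an assumption):

CREATE TRIGGER persons_replicate
ACTIVE AFTER INSERT POSITION 0 ON persons
EXTERNAL NAME 'udrcpp_example!replicate!ds1'
ENGINE udr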

DML Triggers (on Tables or Views)

DML — or “relation” — triggers are executed at the row (record) level, every time the row image changes. A trigger can be either ACTIVE or INACTIVE. Only active triggers are executed. Triggers are created ACTIVE by default.

Who Can Create a DML Trigger?

DML triggers can be created by:

  • Administrators

  • The owner of the table (or view)

  • Users with the ALTER ANY TABLE or — for a view — ALTER ANY VIEW privilege

Forms of Declaration

Firebird supports two forms of declaration for relation triggers:

  • The original, legacy syntax

  • The SQL:2003 standard-compliant form

The SQL:2003 standard-compliant form is the recommended one.

A relation trigger specifies — among other things — a phase and one or more events.

Phase

Phase concerns the timing of the trigger with regard to the change-of-state event in the row of data:

  • A BEFORE trigger is fired before the specified database operation (insert, update or delete) is carried out

  • An AFTER trigger is fired after the database operation has been completed

Row Events

A relation trigger definition specifies at least one of the DML operations INSERT, UPDATE and DELETE, to indicate one or more events on which the trigger should fire. If multiple operations are specified, they must be separated by the keyword OR. No operation may occur more than once.

Within the statement block, the Boolean context variables INSERTING, UPDATING and DELETING can be used to test which operation is currently executing.

Firing Order of Triggers

The keyword POSITION allows an optional execution order (“firing order”) to be specified for a series of triggers that have the same phase and event as their target. The default position is 0. If no positions are specified, or if several triggers have a single position number, the triggers will be executed in the alphabetical order of their names.
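
For example, the following two triggers (hypothetical names) share the same phase and event; tr_bi_audit fires before tr_bi_check because of its lower position:

CREATE TRIGGER tr_bi_audit
ACTIVE BEFORE INSERT POSITION 0 ON customer
AS
BEGIN
  -- executed first
END

CREATE TRIGGER tr_bi_check
ACTIVE BEFORE INSERT POSITION 1 ON customer
AS
BEGIN
  -- executed second
END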

Variable Declarations

The optional declarations section beneath the keyword AS in the header of the trigger is for defining variables and named cursors that are local to the trigger. For more details, see DECLARE VARIABLE and DECLARE CURSOR in the Procedural SQL chapter.

The Trigger Body

The local declarations (if any) are the final part of a trigger’s header section. The trigger body follows, where one or more blocks of PSQL statements are enclosed in a structure that starts with the keyword BEGIN and terminates with the keyword END.

Only the owner of the view or table and administrators have the authority to use CREATE TRIGGER.

Examples of CREATE TRIGGER for Tables and Views
  1. Creating a trigger in the “legacy” form, firing before the event of inserting a new record into the CUSTOMER table occurs.

    CREATE TRIGGER SET_CUST_NO FOR CUSTOMER
    ACTIVE BEFORE INSERT POSITION 0
    AS
    BEGIN
      IF (NEW.CUST_NO IS NULL) THEN
        NEW.CUST_NO = GEN_ID(CUST_NO_GEN, 1);
    END
  2. Creating a trigger firing before the event of inserting a new record into the CUSTOMER table in the SQL:2003 standard-compliant form.

    CREATE TRIGGER set_cust_no
    ACTIVE BEFORE INSERT POSITION 0 ON customer
    AS
    BEGIN
      IF (NEW.cust_no IS NULL) THEN
        NEW.cust_no = GEN_ID(cust_no_gen, 1);
    END
  3. Creating a trigger that will fire after either inserting, updating or deleting a record in the CUSTOMER table.

    CREATE TRIGGER TR_CUST_LOG
    ACTIVE AFTER INSERT OR UPDATE OR DELETE POSITION 10
    ON CUSTOMER
    AS
    BEGIN
      INSERT INTO CHANGE_LOG (LOG_ID,
                              ID_TABLE,
                              TABLE_NAME,
                              MUTATION)
      VALUES (NEXT VALUE FOR SEQ_CHANGE_LOG,
              OLD.CUST_NO,
              'CUSTOMER',
              CASE
                WHEN INSERTING THEN 'INSERT'
                WHEN UPDATING  THEN 'UPDATE'
                WHEN DELETING  THEN 'DELETE'
              END);
    END
Database Triggers

Triggers can be defined to fire upon “database events”, which really refers to a mixture of events that act across the scope of a session (connection) and events that act across the scope of an individual transaction:

  • CONNECT

  • DISCONNECT

  • TRANSACTION START

  • TRANSACTION COMMIT

  • TRANSACTION ROLLBACK

DDL Triggers are a sub-type of database triggers, covered in a separate section.

Who Can Create a Database Trigger?

Database triggers can be created by:

  • Administrators

  • The owner of the database

  • Users with the ALTER DATABASE privilege

Execution of Database Triggers and Exception Handling

CONNECT and DISCONNECT triggers are executed in a transaction created specifically for this purpose. This transaction uses the default isolation level, i.e. snapshot (concurrency), write and wait. If all goes well, the transaction is committed. Uncaught exceptions cause the transaction to roll back, and

  • for a CONNECT trigger, the connection is then broken and the exception is returned to the client

  • for a DISCONNECT trigger, exceptions are not reported. The connection is broken as intended

TRANSACTION triggers are executed within the transaction whose start, commit or rollback evokes them. The action taken after an uncaught exception depends on the event:

  • In a TRANSACTION START trigger, the exception is reported to the client and the transaction is rolled back

  • In a TRANSACTION COMMIT trigger, the exception is reported, the trigger’s actions so far are undone and the commit is cancelled

  • In a TRANSACTION ROLLBACK trigger, the exception is not reported and the transaction is rolled back as intended.

Traps

Obviously there is no direct way of knowing if a DISCONNECT or TRANSACTION ROLLBACK trigger caused an exception. It also follows that a connection to the database cannot happen if a CONNECT trigger causes an exception, and a transaction cannot start if a TRANSACTION START trigger causes one. Both phenomena effectively lock you out of your database until you get in there with database triggers suppressed and fix the bad code.
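
One defensive pattern is to catch all exceptions inside the trigger body so that a failure cannot block connections. A minimal sketch, reusing the LOG_CONNECT table from the examples below:

CREATE TRIGGER tr_connect_safe
ACTIVE ON CONNECT POSITION 0
AS
BEGIN
  INSERT INTO LOG_CONNECT (ID, USERNAME, ATIME)
  VALUES (NEXT VALUE FOR SEQ_LOG_CONNECT, CURRENT_USER, CURRENT_TIMESTAMP);
  WHEN ANY DO
  BEGIN
    -- swallow the error so the connection attempt is not rejected
  END
END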

Suppressing Database Triggers

Some Firebird command-line tools have been supplied with switches that an administrator can use to suppress the automatic firing of database triggers. So far, they are:

gbak -nodbtriggers
isql -nodbtriggers
nbackup -T
Two-phase Commit

In a two-phase commit scenario, TRANSACTION COMMIT triggers fire in the prepare phase, not at the commit.

Some Caveats
  1. The use of the IN AUTONOMOUS TRANSACTION DO statement in the database event triggers related to transactions (TRANSACTION START, TRANSACTION ROLLBACK, TRANSACTION COMMIT) may cause the autonomous transaction to enter an infinite loop

  2. The DISCONNECT and TRANSACTION ROLLBACK event triggers will not be executed when clients are disconnected via monitoring tables (DELETE FROM MON$ATTACHMENTS)

Only the database owner and administrators have the authority to create database triggers.

Examples of CREATE TRIGGER for “Database Triggers”
  1. Creating a trigger for the event of connecting to the database that logs users logging into the system. The trigger is created as inactive.

    CREATE TRIGGER tr_log_connect
    INACTIVE ON CONNECT POSITION 0
    AS
    BEGIN
      INSERT INTO LOG_CONNECT (ID,
                               USERNAME,
                               ATIME)
      VALUES (NEXT VALUE FOR SEQ_LOG_CONNECT,
              CURRENT_USER,
              CURRENT_TIMESTAMP);
    END
  2. Creating a trigger for the event of connecting to the database that does not permit any users, except for SYSDBA, to log in during off hours.

    CREATE EXCEPTION E_INCORRECT_WORKTIME 'The working day has not started yet.';
    
    CREATE TRIGGER TR_LIMIT_WORKTIME ACTIVE
    ON CONNECT POSITION 1
    AS
    BEGIN
      IF ((CURRENT_USER <> 'SYSDBA') AND
          NOT (CURRENT_TIME BETWEEN time '9:00' AND time '17:00')) THEN
        EXCEPTION E_INCORRECT_WORKTIME;
    END
DDL Triggers

DDL triggers allow restrictions to be placed on users who attempt to create, alter or drop a DDL object. Another purpose is to maintain a metadata change log.

DDL triggers fire on specified metadata change events in a specified phase. BEFORE triggers run before changes to system tables. AFTER triggers run after changes to system tables.

The event type [BEFORE | AFTER] of a DDL trigger cannot be changed.

In some sense, DDL triggers are a sub-type of database triggers.

Who Can Create a DDL Trigger?

DDL triggers can be created by:

  • Administrators

  • The owner of the database

  • Users with the ALTER DATABASE privilege

Suppressing DDL Triggers

A DDL trigger is a type of database trigger. See Suppressing Database Triggers for how to suppress database — and DDL — triggers.

Examples of DDL Triggers
  1. Here is how you might use a DDL trigger to enforce a consistent naming scheme, in this case, stored procedure names should begin with the prefix “SP_”:

    set auto on;
    create exception e_invalid_sp_name 'Invalid SP name (should start with SP_)';
    
    set term !;
    
    create trigger trig_ddl_sp before CREATE PROCEDURE
    as
    begin
      if (rdb$get_context('DDL_TRIGGER', 'OBJECT_NAME') not starting 'SP_') then
        exception e_invalid_sp_name;
    end!

    Test

    create procedure sp_test
    as
    begin
    end!
    
    create procedure test
    as
    begin
    end!
    
    -- The last command raises this exception and procedure TEST is not created
    -- Statement failed, SQLSTATE = 42000
    -- exception 1
    -- -E_INVALID_SP_NAME
    -- -Invalid SP name (should start with SP_)
    -- -At trigger 'TRIG_DDL_SP' line: 4, col: 5
    
    set term ;!
  2. Implement custom DDL security, in this case restricting the running of DDL commands to certain users:

    create exception e_access_denied 'Access denied';
    
    set term !;
    
    create trigger trig_ddl before any ddl statement
    as
    begin
      if (current_user <> 'SUPER_USER') then
        exception e_access_denied;
    end!

    Test

    create procedure sp_test
    as
    begin
    end!
    
    -- The last command raises this exception and procedure SP_TEST is not created
    -- Statement failed, SQLSTATE = 42000
    -- exception 1
    -- -E_ACCESS_DENIED
    -- -Access denied
    -- -At trigger 'TRIG_DDL' line: 4, col: 5
    
    set term ;!

    Firebird has privileges for executing DDL statements, so writing a DDL trigger for this purpose should be a last resort, used only if the same effect cannot be achieved with privileges.

  3. Use a trigger to log DDL actions and attempts:

    create sequence ddl_seq;
    
    create table ddl_log (
      id bigint not null primary key,
      moment timestamp not null,
      user_name varchar(31) not null,
      event_type varchar(25) not null,
      object_type varchar(25) not null,
      ddl_event varchar(25) not null,
      object_name varchar(31) not null,
      sql_text blob sub_type text not null,
      ok char(1) not null
    );
    
    set term !;
    
    create trigger trig_ddl_log_before before any ddl statement
    as
      declare id type of column ddl_log.id;
    begin
      -- We do the changes in an AUTONOMOUS TRANSACTION, so if an exception happens
      -- and the command didn't run, the log will survive.
      in autonomous transaction do
      begin
        insert into ddl_log (id, moment, user_name, event_type, object_type,
                             ddl_event, object_name, sql_text, ok)
          values (next value for ddl_seq, current_timestamp, current_user,
                  rdb$get_context('DDL_TRIGGER', 'EVENT_TYPE'),
                  rdb$get_context('DDL_TRIGGER', 'OBJECT_TYPE'),
                  rdb$get_context('DDL_TRIGGER', 'DDL_EVENT'),
                  rdb$get_context('DDL_TRIGGER', 'OBJECT_NAME'),
                  rdb$get_context('DDL_TRIGGER', 'SQL_TEXT'),
                  'N')
          returning id into id;
        rdb$set_context('USER_SESSION', 'trig_ddl_log_id', id);
      end
    end!

    The above trigger will fire for this DDL command. It’s a good idea to use -nodbtriggers when working with them!

    create trigger trig_ddl_log_after after any ddl statement
    as
    begin
      -- Here we need an AUTONOMOUS TRANSACTION because the original transaction
      -- will not see the record inserted on the BEFORE trigger autonomous
      -- transaction if user transaction is not READ COMMITTED.
      in autonomous transaction do
         update ddl_log set ok = 'Y'
         where id = rdb$get_context('USER_SESSION', 'trig_ddl_log_id');
    end!
    
    commit!
    
    set term ;!
    
    -- Delete the record about trig_ddl_log_after creation.
    delete from ddl_log;
    commit;

    Test

    -- This will be logged one time
    -- (as T1 did not exist, RECREATE acts as CREATE) with OK = Y.
    recreate table t1 (
      n1 integer,
      n2 integer
    );
    
    -- This will fail as T1 already exists, so OK will be N.
    create table t1 (
      n1 integer,
      n2 integer
    );
    
    -- T2 does not exist. There will be no log.
    drop table t2;
    
    -- This will be logged twice
    -- (as T1 exists, RECREATE acts as DROP and CREATE) with OK = Y.
    recreate table t1 (
      n integer
    );
    
    commit;
    select id, ddl_event, object_name, sql_text, ok
      from ddl_log order by id;
    
     ID DDL_EVENT                 OBJECT_NAME                      SQL_TEXT OK
    === ========================= ======================= ================= ======
      2 CREATE TABLE              T1                                   80:3 Y
    ====================================================
    SQL_TEXT:
    recreate table t1 (
        n1 integer,
        n2 integer
    )
    ====================================================
      3 CREATE TABLE              T1                                   80:2 N
    ====================================================
    SQL_TEXT:
    create table t1 (
        n1 integer,
        n2 integer
    )
    ====================================================
      4 DROP TABLE                T1                                   80:6 Y
    ====================================================
    SQL_TEXT:
    recreate table t1 (
        n integer
    )
    ====================================================
      5 CREATE TABLE              T1                                   80:9 Y
    ====================================================
    SQL_TEXT:
    recreate table t1 (
        n integer
    )
    ====================================================
See also

ALTER TRIGGER, CREATE OR ALTER TRIGGER, RECREATE TRIGGER, DROP TRIGGER, DDL Triggers in Chapter Procedural SQL (PSQL) Statements

5.7.2. ALTER TRIGGER

Used for

Modifying and deactivating an existing trigger

Available in

DSQL, ESQL

Syntax
ALTER TRIGGER trigname
  [ACTIVE | INACTIVE]
  [{BEFORE | AFTER} <mutation_list>]
  [POSITION number]
  [<module-body>]

!! See syntax of CREATE TRIGGER for further rules !!

The ALTER TRIGGER statement only allows certain changes to the header and body of a trigger.

Permitted Changes to Triggers
  • Status (ACTIVE | INACTIVE)

  • Phase (BEFORE | AFTER) (of DML triggers)

  • Events (of DML triggers)

  • Position in the firing order

  • Modifications to code in the trigger body

If an element is not specified, it remains unchanged.

A DML trigger cannot be changed to a database (or DDL) trigger.

It is not possible to change the event(s) or phase of a database (or DDL) trigger.

Reminders

The BEFORE keyword directs that the trigger be executed before the associated event occurs; the AFTER keyword directs that it be executed after the event.

More than one DML event — INSERT, UPDATE, DELETE — can be covered in a single trigger. The events should be separated with the keyword OR. No event should be mentioned more than once.

The keyword POSITION allows an optional execution order (“firing order”) to be specified for a series of triggers that have the same phase and event as their target. The default position is 0. If no positions are specified, or if several triggers have a single position number, the triggers will be executed in the alphabetical order of their names.

Who Can Alter a Trigger?

DML triggers can be altered by:

  • Administrators

  • The owner of the table (or view)

  • Users with the ALTER ANY TABLE or — for a view — ALTER ANY VIEW privilege

Database and DDL triggers can be altered by:

Examples using ALTER TRIGGER
  1. Deactivating the set_cust_no trigger (switching it to the inactive status).

    ALTER TRIGGER set_cust_no INACTIVE;
  2. Changing the firing order position of the set_cust_no trigger.

    ALTER TRIGGER set_cust_no POSITION 14;
  3. Switching the TR_CUST_LOG trigger to the inactive status and modifying the list of events.

    ALTER TRIGGER TR_CUST_LOG
    INACTIVE AFTER INSERT OR UPDATE;
  4. Switching the tr_log_connect trigger to the active status, changing its position and body.

    ALTER TRIGGER tr_log_connect
    ACTIVE POSITION 1
    AS
    BEGIN
      INSERT INTO LOG_CONNECT (ID,
                               USERNAME,
                               ROLENAME,
                               ATIME)
      VALUES (NEXT VALUE FOR SEQ_LOG_CONNECT,
              CURRENT_USER,
              CURRENT_ROLE,
              CURRENT_TIMESTAMP);
    END

5.7.3. CREATE OR ALTER TRIGGER

Used for

Creating a new trigger or altering an existing trigger

Available in

DSQL

Syntax
CREATE OR ALTER TRIGGER trigname
  { <relation_trigger_legacy>
  | <relation_trigger_sql2003>
  | <database_trigger>
  | <ddl_trigger> }
  <module-body>

!! See syntax of CREATE TRIGGER for further rules !!

The CREATE OR ALTER TRIGGER statement creates a new trigger if it does not exist; otherwise it alters and recompiles it with the privileges intact and dependencies unaffected.

Example of CREATE OR ALTER TRIGGER
Creating a new trigger if it does not exist or altering it if it does exist
CREATE OR ALTER TRIGGER set_cust_no
ACTIVE BEFORE INSERT POSITION 0 ON customer
AS
BEGIN
  IF (NEW.cust_no IS NULL) THEN
    NEW.cust_no = GEN_ID(cust_no_gen, 1);
END

5.7.4. DROP TRIGGER

Used for

Dropping (deleting) an existing trigger

Available in

DSQL, ESQL

Syntax
DROP TRIGGER trigname
Table 41. DROP TRIGGER Statement Parameter
Parameter Description

trigname

Trigger name

The DROP TRIGGER statement drops (deletes) an existing trigger.

Who Can Drop a Trigger?

DML triggers can be dropped by:

  • Administrators

  • The owner of the table (or view)

  • Users with the ALTER ANY TABLE or — for a view — ALTER ANY VIEW privilege

Database and DDL triggers can be dropped by:

  • Administrators

  • The owner of the database

  • Users with the ALTER DATABASE privilege

Example of DROP TRIGGER
Deleting the set_cust_no trigger
DROP TRIGGER set_cust_no;

5.7.5. RECREATE TRIGGER

Used for

Creating a new trigger or recreating an existing trigger

Available in

DSQL

Syntax
RECREATE TRIGGER trigname
  { <relation_trigger_legacy>
  | <relation_trigger_sql2003>
  | <database_trigger>
  | <ddl_trigger> }
  <module-body>

!! See syntax of CREATE TRIGGER for further rules !!

The RECREATE TRIGGER statement creates a new trigger if no trigger with the specified name exists; otherwise the RECREATE TRIGGER statement tries to drop the existing trigger and create a new one. The operation will fail on COMMIT if the trigger is in use.

Be aware that dependency errors are not detected until the COMMIT phase of this operation.

Example of RECREATE TRIGGER

Creating or recreating the set_cust_no trigger.

RECREATE TRIGGER set_cust_no
ACTIVE BEFORE INSERT POSITION 0 ON customer
AS
BEGIN
  IF (NEW.cust_no IS NULL) THEN
    NEW.cust_no = GEN_ID(cust_no_gen, 1);
END

5.8. PROCEDURE

A stored procedure is a software module that can be called from a client, another procedure, function, executable block or trigger. Stored procedures, stored functions, executable blocks and triggers are written in procedural SQL (PSQL). Most SQL statements are also available in PSQL, sometimes with limitations or extensions; notable limitations are DDL and transaction control statements.

Stored procedures can have many input and output parameters.

5.8.1. CREATE PROCEDURE

Used for

Creating a new stored procedure

Available in

DSQL, ESQL

Syntax
CREATE PROCEDURE procname [ ( [ <in_params> ] ) ]
  [RETURNS (<out_params>)]
  <module-body>

<module-body> ::=
  !! See Syntax of Module Body !!

<in_params> ::= <inparam> [, <inparam> ...]

<inparam> ::= <param_decl> [{= | DEFAULT} <value>]

<out_params> ::= <outparam> [, <outparam> ...]

<outparam> ::= <param_decl>

<value> ::= {<literal> | NULL | <context_var>}

<param_decl> ::= paramname <domain_or_non_array_type> [NOT NULL]
  [COLLATE collation]

<type> ::=
    <datatype>
  | [TYPE OF] domain
  | TYPE OF COLUMN rel.col

<domain_or_non_array_type> ::=
  !! See Scalar Data Types Syntax !!
Table 42. CREATE PROCEDURE Statement Parameters
Parameter Description

procname

Stored procedure name consisting of up to 31 characters. Must be unique among all table, view and procedure names in the database

inparam

Input parameter description

outparam

Output parameter description

literal

A literal value that is assignment-compatible with the data type of the parameter

context_var

Any context variable whose type is compatible with the data type of the parameter

paramname

The name of an input or output parameter of the procedure. It may consist of up to 31 characters. The name of the parameter must be unique among input and output parameters of the procedure and its local variables

collation

Collation sequence

The CREATE PROCEDURE statement creates a new stored procedure. The name of the procedure must be unique among the names of all stored procedures, tables and views in the database.

CREATE PROCEDURE is a compound statement, consisting of a header and a body. The header specifies the name of the procedure and declares input parameters and the output parameters, if any, that are to be returned by the procedure.

The procedure body consists of declarations for any local variables and named cursors that will be used by the procedure, followed by one or more statements, or blocks of statements, all enclosed in an outer block that begins with the keyword BEGIN and ends with the keyword END. Declarations and embedded statements are terminated with semi-colons (‘;’).

Statement Terminators

Some SQL statement editors — specifically the isql utility that comes with Firebird and possibly some third-party editors — employ an internal convention that requires all statements to be terminated with a semi-colon. This creates a conflict with PSQL syntax when coding in these environments. If you are unacquainted with this problem and its solution, please study the details in the PSQL chapter in the section entitled Switching the Terminator in isql.

Parameters

Each parameter has a data type. The NOT NULL constraint can also be specified for any parameter, to prevent NULL being passed or assigned to it.

A collation sequence can be specified for string-type parameters, using the COLLATE clause.

Input Parameters

Input parameters are presented as a parenthesized list following the name of the procedure. They are passed by value into the procedure, so any changes inside the procedure have no effect on the parameters in the caller. Input parameters may have default values. Parameters with default values must be placed at the end of the list of parameters.

Output Parameters

The optional RETURNS clause is for specifying a parenthesised list of output parameters for the stored procedure.

Variable, Cursor and Sub-Routine Declarations

The optional declarations section, located at the start of the body of the procedure definition, defines variables (including cursors) and sub-routines local to the procedure. Local variable declarations follow the same rules as parameters regarding specification of the data type. See details in the PSQL chapter for DECLARE VARIABLE, DECLARE CURSOR, DECLARE FUNCTION, and DECLARE PROCEDURE.

External UDR Procedures

A stored procedure can also be located in an external module. In this case, instead of a procedure body, the CREATE PROCEDURE specifies the location of the procedure in the external module using the EXTERNAL clause. The optional NAME clause specifies the name of the external module, the name of the procedure inside the module, and — optionally — user-defined information. The required ENGINE clause specifies the name of the UDR engine that handles communication between Firebird and the external module. The optional AS clause accepts a string literal “body”, which can be used by the engine or module for various purposes.
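
A sketch of such a declaration, modelled on the udrcpp_example module from the Firebird UDR examples:

CREATE PROCEDURE gen_rows (
  start_n INTEGER NOT NULL,
  end_n INTEGER NOT NULL
)
RETURNS (n INTEGER NOT NULL)
EXTERNAL NAME 'udrcpp_example!gen_rows'
ENGINE udr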

Who Can Create a Procedure

The CREATE PROCEDURE statement can be executed by:

  • Administrators

  • Users with the CREATE PROCEDURE privilege

The user executing the CREATE PROCEDURE statement becomes the owner of the procedure.

Examples
  1. Creating a stored procedure that inserts a record into the BREED table and returns the code of the inserted record:

    CREATE PROCEDURE ADD_BREED (
      NAME D_BREEDNAME, /* Domain attributes are inherited */
      NAME_EN TYPE OF D_BREEDNAME, /* Only the domain type is inherited */
      SHORTNAME TYPE OF COLUMN BREED.SHORTNAME,
        /* The table column type is inherited */
      REMARK VARCHAR(120) CHARACTER SET WIN1251 COLLATE PXW_CYRL,
      CODE_ANIMAL INT NOT NULL DEFAULT 1
    )
    RETURNS (
      CODE_BREED INT
    )
    AS
    BEGIN
      INSERT INTO BREED (
        CODE_ANIMAL, NAME, NAME_EN, SHORTNAME, REMARK)
      VALUES (
        :CODE_ANIMAL, :NAME, :NAME_EN, :SHORTNAME, :REMARK)
      RETURNING CODE_BREED INTO CODE_BREED;
    END
  2. Creating a selectable stored procedure that generates data for mailing labels (from employee.fdb):

    CREATE PROCEDURE mail_label (cust_no INTEGER)
    RETURNS (line1 CHAR(40), line2 CHAR(40), line3 CHAR(40),
             line4 CHAR(40), line5 CHAR(40), line6 CHAR(40))
    AS
      DECLARE VARIABLE customer VARCHAR(25);
      DECLARE VARIABLE first_name VARCHAR(15);
      DECLARE VARIABLE last_name VARCHAR(20);
      DECLARE VARIABLE addr1 VARCHAR(30);
      DECLARE VARIABLE addr2 VARCHAR(30);
      DECLARE VARIABLE city VARCHAR(25);
      DECLARE VARIABLE state VARCHAR(15);
      DECLARE VARIABLE country VARCHAR(15);
      DECLARE VARIABLE postcode VARCHAR(12);
      DECLARE VARIABLE cnt INTEGER;
    BEGIN
      line1 = '';
      line2 = '';
      line3 = '';
      line4 = '';
      line5 = '';
      line6 = '';
    
      SELECT customer, contact_first, contact_last, address_line1,
        address_line2, city, state_province, country, postal_code
      FROM CUSTOMER
      WHERE cust_no = :cust_no
      INTO :customer, :first_name, :last_name, :addr1, :addr2,
        :city, :state, :country, :postcode;
    
      IF (customer IS NOT NULL) THEN
        line1 = customer;
      IF (first_name IS NOT NULL) THEN
        line2 = first_name || ' ' || last_name;
      ELSE
        line2 = last_name;
      IF (addr1 IS NOT NULL) THEN
        line3 = addr1;
      IF (addr2 IS NOT NULL) THEN
        line4 = addr2;
    
      IF (country = 'USA') THEN
      BEGIN
        IF (city IS NOT NULL) THEN
          line5 = city || ', ' || state || '  ' || postcode;
        ELSE
          line5 = state || '  ' || postcode;
      END
      ELSE
      BEGIN
        IF (city IS NOT NULL) THEN
          line5 = city || ', ' || state;
        ELSE
          line5 = state;
        line6 = country || '    ' || postcode;
      END
    
      SUSPEND; -- the statement that sends an output row to the buffer
               -- and makes the procedure "selectable"
    END

5.8.2. ALTER PROCEDURE

Used for

Modifying an existing stored procedure

Available in

DSQL, ESQL

Syntax
ALTER PROCEDURE procname [ ( [ <in_params> ] ) ]
  [RETURNS (<out_params>)]
  <module-body>

!! See syntax of CREATE PROCEDURE for further rules !!

The ALTER PROCEDURE statement allows the following changes to a stored procedure definition:

  • the set and characteristics of input and output parameters

  • local variables

  • code in the body of the stored procedure

After ALTER PROCEDURE executes, existing privileges remain intact and dependencies are not affected.

Take care when changing the number and type of input and output parameters of stored procedures. Existing application code and procedures and triggers that call it could become invalid because the new description of the parameters is incompatible with the old calling format. For information on how to troubleshoot such a situation, see the article The RDB$VALID_BLR Field in the Appendix.
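
For example, stored procedures whose compiled code has been invalidated by such a change can be listed by checking the RDB$VALID_BLR flag in the system tables (the same flag exists in RDB$TRIGGERS):

SELECT RDB$PROCEDURE_NAME
FROM RDB$PROCEDURES
WHERE RDB$VALID_BLR = 0;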

Who Can Alter a Procedure

The ALTER PROCEDURE statement can be executed by:

  • Administrators

  • The owner of the stored procedure

  • Users with the ALTER ANY PROCEDURE privilege

ALTER PROCEDURE Example
Altering the GET_EMP_PROJ stored procedure.
ALTER PROCEDURE GET_EMP_PROJ (
  EMP_NO SMALLINT)
RETURNS (
  PROJ_ID VARCHAR(20))
AS
BEGIN
  FOR SELECT
      PROJ_ID
    FROM
      EMPLOYEE_PROJECT
    WHERE
      EMP_NO = :emp_no
    INTO :proj_id
  DO
    SUSPEND;
END

5.8.3. CREATE OR ALTER PROCEDURE

Used for

Creating a new stored procedure or altering an existing one

Available in

DSQL

Syntax
CREATE OR ALTER PROCEDURE procname [ ( [ <in_params> ] ) ]
  [RETURNS (<out_params>)]
  <module-body>

!! See syntax of CREATE PROCEDURE for further rules !!

The CREATE OR ALTER PROCEDURE statement creates a new stored procedure or alters an existing one. If the stored procedure does not exist, it will be created by invoking a CREATE PROCEDURE statement transparently. If the procedure already exists, it will be altered and compiled without affecting its existing privileges and dependencies.

CREATE OR ALTER PROCEDURE Example
Creating or altering the GET_EMP_PROJ procedure.
CREATE OR ALTER PROCEDURE GET_EMP_PROJ (
    EMP_NO SMALLINT)
RETURNS (
    PROJ_ID VARCHAR(20))
AS
BEGIN
  FOR SELECT
      PROJ_ID
    FROM
      EMPLOYEE_PROJECT
    WHERE
      EMP_NO = :emp_no
    INTO :proj_id
  DO
    SUSPEND;
END

5.8.4. DROP PROCEDURE

Used for

Deleting a stored procedure

Available in

DSQL, ESQL

Syntax
DROP PROCEDURE procname
Table 43. DROP PROCEDURE Statement Parameter
Parameter Description

procname

Name of an existing stored procedure

The DROP PROCEDURE statement deletes an existing stored procedure. If the stored procedure has any dependencies, the attempt to delete it will fail and the appropriate error will be raised.

Who Can Drop a Procedure

The DROP PROCEDURE statement can be executed by:

  • Administrators

  • The owner of the stored procedure

  • Users with the DROP ANY PROCEDURE privilege

DROP PROCEDURE Example
Deleting the GET_EMP_PROJ stored procedure.
DROP PROCEDURE GET_EMP_PROJ;

5.8.5. RECREATE PROCEDURE

Used for

Creating a new stored procedure or recreating an existing one

Available in

DSQL

Syntax
RECREATE PROCEDURE procname [ ( [ <in_params> ] ) ]
  [RETURNS (<out_params>)]
  <module-body>

!! See syntax of CREATE PROCEDURE for further rules !!

The RECREATE PROCEDURE statement creates a new stored procedure or recreates an existing one. If there is a procedure with this name already, the engine will try to delete it and create a new one. Recreating an existing procedure will fail at the COMMIT request if the procedure has dependencies.

Be aware that dependency errors are not detected until the COMMIT phase of this operation.

After a procedure is successfully recreated, existing privileges to execute the stored procedure and the privileges of the stored procedure itself are dropped.

RECREATE PROCEDURE Example
Creating the new GET_EMP_PROJ stored procedure or recreating the existing GET_EMP_PROJ stored procedure.
RECREATE PROCEDURE GET_EMP_PROJ (
  EMP_NO SMALLINT)
RETURNS (
  PROJ_ID VARCHAR(20))
AS
BEGIN
  FOR SELECT
      PROJ_ID
    FROM
      EMPLOYEE_PROJECT
    WHERE
      EMP_NO = :emp_no
    INTO :proj_id
  DO
    SUSPEND;
END

5.9. FUNCTION

A stored function is a user-defined function stored in the metadata of a database, and running on the server. Stored functions can be called by stored procedures, stored functions (including the function itself), triggers and client programs. When a stored function calls itself, such a stored function is called a recursive function.

Unlike stored procedures, stored functions always return a single scalar value. To return a value from a stored function, use the RETURN statement, which immediately terminates the function.
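
A minimal sketch of a recursive stored function (the name FACT_N is hypothetical):

CREATE FUNCTION FACT_N (N INTEGER)
RETURNS BIGINT
AS
BEGIN
  IF (N <= 1) THEN
    RETURN 1;
  -- the function calls itself until the base case is reached
  RETURN N * FACT_N(N - 1);
END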

5.9.1. CREATE FUNCTION

Used for

Creating a new stored function

Available in

DSQL

Syntax
CREATE FUNCTION funcname [ ( [ <in_params> ] ) ]
  RETURNS <domain_or_non_array_type> [COLLATE collation]
  [DETERMINISTIC]
  <module-body>

<module-body> ::=
  !! See Syntax of Module Body !!

<in_params> ::= <inparam> [, <inparam> ... ]

<inparam> ::= <param-decl> [ { = | DEFAULT } <value> ]

<value> ::= { <literal> | NULL | <context-var> }

<param-decl> ::= paramname <domain_or_non_array_type> [NOT NULL]
  [COLLATE collation]

<domain_or_non_array_type> ::=
  !! See Scalar Data Types Syntax !!
Table 44. CREATE FUNCTION Statement Parameters
Parameter Description

funcname

Stored function name consisting of up to 31 characters. Must be unique among all function names in the database.

inparam

Input parameter description

collation

Collation sequence

literal

A literal value that is assignment-compatible with the data type of the parameter

context-var

Any context variable whose type is compatible with the data type of the parameter

paramname

The name of an input parameter of the function. It may consist of up to 31 characters. The name of the parameter must be unique among input parameters of the function and its local variables.

The CREATE FUNCTION statement creates a new stored function. The stored function name must be unique among the names of all stored and external (legacy) functions, excluding sub-functions or functions in packages. For sub-functions or functions in packages, the name must be unique within its module (package, stored procedure, stored function, trigger).

It is advisable to not reuse function names between global stored functions and stored functions in packages, although this is legal. At the moment, it is not possible to call a function or procedure from the global namespace from inside a package, if that package defines a function or procedure with the same name. In that situation, the function or procedure of the package will be called.

CREATE FUNCTION is a compound statement with a header and a body. The header defines the name of the stored function, and declares input parameters and return type.

The function body consists of optional declarations of local variables, named cursors, and subroutines (sub-functions and sub-procedures), and one or more statements or statement blocks, enclosed in an outer block that starts with the keyword BEGIN and ends with the keyword END. Declarations and statements inside the function body must be terminated with a semicolon (‘;’).

Statement Terminators

Some SQL statement editors — specifically the isql utility that comes with Firebird and possibly some third-party editors — employ an internal convention that requires all statements to be terminated with a semi-colon. This creates a conflict with PSQL syntax when coding in these environments. If you are unacquainted with this problem and its solution, please study the details in the PSQL chapter in the section entitled Switching the Terminator in isql.

Parameters

Each parameter has a data type.

A collation sequence can be specified for string-type parameters, using the COLLATE clause.

Input Parameters

Input parameters are presented as a parenthesized list following the name of the function. They are passed by value into the function, so any changes inside the function have no effect on the parameters in the caller. The NOT NULL constraint can also be specified for any input parameter, to prevent NULL being passed or assigned to it. Input parameters may have default values. Parameters with default values must be placed at the end of the list of parameters.

Output Parameter

The RETURNS clause specifies the return type of the stored function. If a function returns a string value, then it is possible to specify the collation using the COLLATE clause. As a return type, you can specify a data type, a domain name, the type of a domain (using TYPE OF), or the type of a column of a table or view (using TYPE OF COLUMN).

Deterministic functions

The optional DETERMINISTIC clause indicates that the function is deterministic. Deterministic functions always return the same result for the same set of inputs. Non-deterministic functions can return different results for each invocation, even for the same set of inputs. If a function is specified as deterministic, then such a function might not be called again if it has already been called once with the given set of inputs, and instead takes the result from a metadata cache.

Current versions of Firebird do not actually cache results of deterministic functions.

Specifying the DETERMINISTIC clause is actually something like a “promise” that the function will return the same thing for equal inputs. At the moment, a deterministic function is considered an invariant, and works like other invariants: it is computed and cached at the current execution level of a given statement.

This is easily demonstrated with an example:

CREATE FUNCTION FN_T
RETURNS DOUBLE PRECISION DETERMINISTIC
AS
BEGIN
  RETURN rand();
END;

-- the function will be evaluated twice and will return 2 different values
SELECT fn_t() FROM rdb$database
UNION ALL
SELECT fn_t() FROM rdb$database;

-- the function will be evaluated once and will return 2 identical values
WITH t (n) AS (
  SELECT 1 FROM rdb$database
  UNION ALL
  SELECT 2 FROM rdb$database
)
SELECT n, fn_t() FROM t;
Variable, Cursor and Sub-Routine Declarations

The optional declarations section, located at the start of the body of the function definition, defines variables (including cursors) and sub-routines local to the function. Local variable declarations follow the same rules as parameters regarding specification of the data type. See details in the PSQL chapter for DECLARE VARIABLE, DECLARE CURSOR, DECLARE FUNCTION, and DECLARE PROCEDURE.

Function Body

The header section is followed by the function body, consisting of one or more PSQL statements enclosed between the outer keywords BEGIN and END. Multiple BEGIN … END blocks of terminated statements may be embedded inside the function body.

External UDR Functions

A stored function can also be located in an external module. In this case, instead of a function body, the CREATE FUNCTION specifies the location of the function in the external module using the EXTERNAL clause. The optional NAME clause specifies the name of the external module, the name of the function inside the module, and — optionally — user-defined information. The required ENGINE clause specifies the name of the UDR engine that handles communication between Firebird and the external module. The optional AS clause accepts a string literal “body”, which can be used by the engine or module for various purposes.

External UDR (User Defined Routine) functions created using CREATE FUNCTION …​ EXTERNAL …​ should not be confused with legacy UDFs (User Defined Functions) declared using DECLARE EXTERNAL FUNCTION.

UDFs are deprecated, a legacy from previous versions of Firebird. Their capabilities are significantly inferior to those of the new type of external functions, UDR.

Who Can Create a Function

The CREATE FUNCTION statement can be executed by:

  • Administrators

  • Users with the CREATE FUNCTION privilege

The user who created the stored function becomes its owner.

CREATE FUNCTION Examples
  1. Creating a stored function

    CREATE FUNCTION ADD_INT (A INT, B INT DEFAULT 0)
    RETURNS INT
    AS
    BEGIN
      RETURN A + B;
    END

    Calling in a select:

    SELECT ADD_INT(2, 3) AS R FROM RDB$DATABASE

    Call inside PSQL code, the second optional parameter is not specified:

    MY_VAR = ADD_INT(A);
  2. Creating a deterministic stored function

    CREATE FUNCTION FN_E()
    RETURNS DOUBLE PRECISION DETERMINISTIC
    AS
    BEGIN
      RETURN EXP(1);
    END
  3. Creating a stored function with table column type parameters

    Returns the name of a type by field name and value

    CREATE FUNCTION GET_MNEMONIC (
      AFIELD_NAME TYPE OF COLUMN RDB$TYPES.RDB$FIELD_NAME,
      ATYPE TYPE OF COLUMN RDB$TYPES.RDB$TYPE)
    RETURNS TYPE OF COLUMN RDB$TYPES.RDB$TYPE_NAME
    AS
    BEGIN
      RETURN (SELECT RDB$TYPE_NAME
              FROM RDB$TYPES
              WHERE RDB$FIELD_NAME = :AFIELD_NAME
              AND RDB$TYPE = :ATYPE);
    END
  4. Creating an external stored function

    Create a function located in an external module (UDR). Function implementation is located in the external module udrcpp_example. The name of the function inside the module is wait_event.

    CREATE FUNCTION wait_event (
      event_name varchar (31) CHARACTER SET ascii
    ) RETURNS INTEGER
    EXTERNAL NAME 'udrcpp_example!Wait_event'
    ENGINE udr
  5. Creating a stored function containing a sub-function

    Creating a function to convert a number to hexadecimal format.

    CREATE FUNCTION INT_TO_HEX (
      ANumber BIGINT,
      AByte_Per_Number SMALLINT = 8)
    RETURNS CHAR(66)
    AS
    DECLARE VARIABLE xMod SMALLINT;
    DECLARE VARIABLE xResult VARCHAR(64);
    DECLARE FUNCTION TO_HEX (ANum SMALLINT) RETURNS CHAR
      AS
      BEGIN
        RETURN CASE ANum
          WHEN 0 THEN '0'
          WHEN 1 THEN '1'
          WHEN 2 THEN '2'
          WHEN 3 THEN '3'
          WHEN 4 THEN '4'
          WHEN 5 THEN '5'
          WHEN 6 THEN '6'
          WHEN 7 THEN '7'
          WHEN 8 THEN '8'
          WHEN 9 THEN '9'
          WHEN 10 THEN 'A'
          WHEN 11 THEN 'B'
          WHEN 12 THEN 'C'
          WHEN 13 THEN 'D'
          WHEN 14 THEN 'E'
          WHEN 15 THEN 'F'
          ELSE NULL
        END;
      END
    BEGIN
      xMod = MOD(ANumber, 16);
      ANumber = ANumber / 16;
      xResult = TO_HEX(xMod);
      WHILE (ANumber > 0) DO
      BEGIN
        xMod = MOD(ANumber, 16);
        ANumber = ANumber / 16;
        xResult = TO_HEX(xMod) || xResult;
      END
      RETURN '0x' || LPAD(xResult, AByte_Per_Number * 2, '0');
    END

5.9.2. ALTER FUNCTION

Used for

Modifying an existing stored function

Available in

DSQL

Syntax
ALTER FUNCTION funcname
  [ ( [ <in_params> ] ) ]
  RETURNS <domain_or_non_array_type> [COLLATE collation]
  [DETERMINISTIC]
  <module-body>

!! See syntax of CREATE FUNCTION for further rules !!

The ALTER FUNCTION statement allows the following changes to a stored function definition:

  • the set and characteristics of input parameters and the output type

  • local variables, named cursors, and sub-routines

  • code in the body of the stored function

For external functions (UDR), you can change the entry point and engine name. For legacy external functions declared using DECLARE EXTERNAL FUNCTION — also known as UDFs — it is not possible to convert to PSQL and vice versa.

After ALTER FUNCTION executes, existing privileges remain intact and dependencies are not affected.

Take care when changing the number and type of input parameters and the output type of a stored function. Existing application code and procedures, functions and triggers that call it could become invalid because the new description of the parameters is incompatible with the old calling format. For information on how to troubleshoot such a situation, see the article The RDB$VALID_BLR Field in the Appendix.

Who Can Alter a Function

The ALTER FUNCTION statement can be executed by:

  • Administrators

  • Owner of the stored function

  • Users with the ALTER ANY FUNCTION privilege

Examples of ALTER FUNCTION
Altering a stored function
ALTER FUNCTION ADD_INT(A INT, B INT, C INT)
RETURNS INT
AS
BEGIN
  RETURN A + B + C;
END

5.9.3. CREATE OR ALTER FUNCTION

Used for

Creating a new or modifying an existing stored function

Available in

DSQL

Syntax
CREATE OR ALTER FUNCTION funcname
  [ ( [ <in_params> ] ) ]
  RETURNS <domain_or_non_array_type> [COLLATE collation]
  [DETERMINISTIC]
  <module-body>

!! See syntax of CREATE FUNCTION for further rules !!

The CREATE OR ALTER FUNCTION statement creates a new stored function or alters an existing one. If the stored function does not exist, it will be created by invoking a CREATE FUNCTION statement transparently. If the function already exists, it will be altered and compiled (through ALTER FUNCTION) without affecting its existing privileges and dependencies.

Examples of CREATE OR ALTER FUNCTION
Create a new or alter an existing stored function
CREATE OR ALTER FUNCTION ADD_INT(A INT, B INT DEFAULT 0)
RETURNS INT
AS
BEGIN
  RETURN A + B;
END

5.9.4. DROP FUNCTION

Used for

Dropping a stored function

Available in

DSQL

Syntax
DROP FUNCTION funcname
Table 45. DROP FUNCTION Statement Parameters
Parameter Description

funcname

Stored function name consisting of up to 31 characters. Must be unique among all function names in the database.

The DROP FUNCTION statement deletes an existing stored function. If the stored function has any dependencies, the attempt to delete it will fail and the appropriate error will be raised.

Who Can Drop a Function

The DROP FUNCTION statement can be executed by:

  • Administrators

  • Owner of the stored function

  • Users with the DROP ANY FUNCTION privilege

Examples of DROP FUNCTION
DROP FUNCTION ADD_INT;

5.9.5. RECREATE FUNCTION

Used for

Creating a new stored function or recreating an existing one

Available in

DSQL

Syntax
RECREATE FUNCTION funcname
  [ ( [ <in_params> ] ) ]
  RETURNS <domain_or_non_array_type> [COLLATE collation]
  [DETERMINISTIC]
  <module-body>

!! See syntax of CREATE FUNCTION for further rules !!

The RECREATE FUNCTION statement creates a new stored function or recreates an existing one. If there is a function with this name already, the engine will try to drop it and then create a new one. Recreating an existing function will fail at COMMIT if the function has dependencies.

Be aware that dependency errors are not detected until the COMMIT phase of this operation.

After a function is successfully recreated, existing privileges to execute the stored function and the privileges of the stored function itself are dropped.

Examples of RECREATE FUNCTION
Creating or recreating a stored function
RECREATE FUNCTION ADD_INT(A INT, B INT DEFAULT 0)
RETURNS INT
AS
BEGIN
  RETURN A + B;
END

5.10. EXTERNAL FUNCTION

External functions, also known as “User-Defined Functions” (UDFs), are programs written in an external programming language and stored in dynamically loaded libraries. Once declared in a database, they become available in dynamic and procedural statements as though they were implemented in the SQL language.

External functions extend the possibilities for processing data with SQL considerably. To make a function available to a database, it is declared using the statement DECLARE EXTERNAL FUNCTION.

The library containing a function is loaded when any function included in it is called.

External functions declared as DECLARE EXTERNAL FUNCTION are a legacy from previous versions of Firebird. Their capabilities are inferior to the capabilities of the new type of external functions, UDR (User-Defined Routine). Such functions are declared as CREATE FUNCTION …​ EXTERNAL …​. See CREATE FUNCTION for details.

External functions may be contained in more than one library — or “module”, as it is referred to in the syntax.

UDFs are fundamentally insecure. We recommend avoiding their use whenever possible, and disabling UDFs in your database configuration (UdfAccess = None in firebird.conf). If you do need to call native code from your database, use a UDR external engine instead.

See also

FUNCTION

5.10.1. DECLARE EXTERNAL FUNCTION

Used for

Declaring a user-defined function (UDF) to the database

Available in

DSQL, ESQL

Syntax
DECLARE EXTERNAL FUNCTION funcname
  [{ <arg_desc_list> | ( <arg_desc_list> ) }]
  RETURNS { <return_value> | ( <return_value> ) }
  ENTRY_POINT 'entry_point' MODULE_NAME 'library_name'

<arg_desc_list> ::=
  <arg_type_decl> [, <arg_type_decl> ...]

<arg_type_decl> ::=
  <udf_data_type> [BY {DESCRIPTOR | SCALAR_ARRAY} | NULL]

<udf_data_type> ::=
    <scalar_datatype>
  | BLOB
  | CSTRING(length) [ CHARACTER SET charset ]

<scalar_datatype> ::=
  !! See Scalar Data Types Syntax !!

<return_value> ::=
  { <udf_data_type> | PARAMETER param_num }
  [{ BY VALUE | BY DESCRIPTOR [FREE_IT] | FREE_IT }]
Table 46. DECLARE EXTERNAL FUNCTION Statement Parameters
Parameter Description

funcname

Function name in the database. It may consist of up to 31 characters. It should be unique among all internal and external function names in the database and need not be the same name as the name exported from the UDF library via ENTRY_POINT.

entry_point

The exported name of the function

library_name

The name of the module (MODULE_NAME) from which the function is exported. This will be the name of the file, without the “.dll” or “.so” file extension.

length

The maximum length of a null-terminated string, specified in bytes

charset

Character set of the CSTRING

param_num

The number of the input parameter, numbered from 1 in the list of input parameters in the declaration, describing the data type that will be returned by the function

The DECLARE EXTERNAL FUNCTION statement makes a user-defined function available in the database. UDF declarations must be made in each database that is going to use them. There is no need to declare UDFs that will never be used.

The name of the external function must be unique among all function names. It may be different from the exported name of the function, as specified in the ENTRY_POINT argument.

DECLARE EXTERNAL FUNCTION Input Parameters

The input parameters of the function follow the name of the function and are separated with commas. Each parameter has an SQL data type specified for it. Arrays cannot be used as function parameters. In addition to the SQL types, the CSTRING type is available for specifying a null-terminated string with a maximum length of LENGTH bytes. There are several mechanisms for passing a parameter from the Firebird engine to an external function; each of these mechanisms is discussed below.

By default, input parameters are passed by reference. There is no separate clause to explicitly indicate that parameters are passed by reference.

When a NULL value is passed by reference, it is converted to the equivalent of zero, for example, the number ‘0’ or an empty string (“''”). If the keyword NULL is specified after a parameter, then a NULL value is passed to the external function as a null pointer.

Declaring a function with the NULL keyword does not guarantee that the function will correctly handle a NULL input parameter. Any function must be written or rewritten to correctly handle NULL values. Always use the function declaration as provided by its developer.
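For example, a hypothetical declaration in which a NULL input arrives as a null pointer instead of zero (the function, entry point and module name are illustrative):

DECLARE EXTERNAL FUNCTION my_log
  DOUBLE PRECISION NULL
  RETURNS DOUBLE PRECISION BY VALUE
  ENTRY_POINT 'my_log' MODULE_NAME 'my_udf_lib';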

If BY DESCRIPTOR is specified, then the input parameter is passed by descriptor. In this case, the UDF parameter will receive a pointer to an internal structure known as a descriptor. The descriptor contains information about the data type, subtype, precision, character set and collation, scale, a pointer to the data itself and some flags, including the NULL indicator. This declaration only works if the external function is written to work with descriptors.

When passing a function parameter by descriptor, the passed value is not cast to the declared data type.

The BY SCALAR_ARRAY clause is used when passing arrays as input parameters. Unlike other types, you cannot return an array from a UDF.

Clauses and Keywords
RETURNS clause

(Required) specifies the output parameter returned by the function. A function is scalar: it returns exactly one value (the output parameter). The output parameter can be of any SQL type (except an array or an array element) or a null-terminated string (CSTRING). The output parameter can be passed by reference (the default), by descriptor or by value. If the BY DESCRIPTOR clause is specified, the output parameter is passed by descriptor. If the BY VALUE clause is specified, the output parameter is passed by value.

PARAMETER keyword

specifies that the function returns the value from the parameter under number param_num. It is necessary if you need to return a value of data type BLOB.
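For example, the string2blob function in the shipped fbudf module returns a BLOB through its second parameter; its declaration (as given in the fbudf.sql script) reads:

DECLARE EXTERNAL FUNCTION string2blob
  VARCHAR(300) BY DESCRIPTOR,
  BLOB
  RETURNS PARAMETER 2
  ENTRY_POINT 'string2blob' MODULE_NAME 'fbudf';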

FREE_IT keyword

means that the memory allocated for storing the return value will be freed after the function is executed. It is used only if the memory was allocated dynamically in the UDF. In such a UDF, the memory must be allocated with the ib_util_malloc function from the ib_util module, to remain compatible with the functions used by the Firebird code and the shipped UDF modules for allocating and freeing memory.
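For example, the lpad function from the ib_udf library allocates its result with ib_util_malloc and is therefore declared with FREE_IT (declaration as in the ib_udf.sql script):

DECLARE EXTERNAL FUNCTION lpad
  CSTRING(255), INTEGER, CSTRING(1)
  RETURNS CSTRING(255) FREE_IT
  ENTRY_POINT 'IB_UDF_lpad' MODULE_NAME 'ib_udf';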

ENTRY_POINT clause

specifies the name of the entry point (the name of the imported function), as exported from the module.

MODULE_NAME clause

defines the name of the module where the exported function is located. The reference to the module should not include the full path or the file extension, if that can be avoided. If the module is located in the default location (the ../UDF subdirectory of the Firebird server root) or in a location explicitly configured in firebird.conf, omitting the path and extension makes it easier to move the database between different platforms. The UdfAccess parameter in the firebird.conf file allows access restrictions to external function modules to be configured.

Any user connected to the database can declare an external function (UDF).

Who Can Create an External Function

The DECLARE EXTERNAL FUNCTION statement can be executed by:

  • Administrators

  • Users with the CREATE FUNCTION privilege

The user who created the function becomes its owner.

Examples using DECLARE EXTERNAL FUNCTION
  1. Declaring the addDay external function located in the fbudf module. The input and output parameters are passed by reference.

    DECLARE EXTERNAL FUNCTION addDay
      TIMESTAMP, INT
      RETURNS TIMESTAMP
      ENTRY_POINT 'addDay' MODULE_NAME 'fbudf';
  2. Declaring the invl external function located in the fbudf module. The input and output parameters are passed by descriptor.

    DECLARE EXTERNAL FUNCTION invl
      INT BY DESCRIPTOR, INT BY DESCRIPTOR
      RETURNS INT BY DESCRIPTOR
      ENTRY_POINT 'idNvl' MODULE_NAME 'fbudf';
  3. Declaring the isLeapYear external function located in the fbudf module. The input parameter is passed by reference, while the output parameter is passed by value.

    DECLARE EXTERNAL FUNCTION isLeapYear
      TIMESTAMP
      RETURNS INT BY VALUE
      ENTRY_POINT 'isLeapYear' MODULE_NAME 'fbudf';
  4. Declaring the i64Truncate external function located in the fbudf module. The input and output parameters are passed by descriptor. The second parameter of the function is used as the return value.

    DECLARE EXTERNAL FUNCTION i64Truncate
      NUMERIC(18) BY DESCRIPTOR, NUMERIC(18) BY DESCRIPTOR
      RETURNS PARAMETER 2
      ENTRY_POINT 'fbtruncate' MODULE_NAME 'fbudf';

5.10.2. ALTER EXTERNAL FUNCTION

Used for

Changing the entry point and/or the module name for a user-defined function (UDF)

Available in

DSQL

Syntax
ALTER EXTERNAL FUNCTION funcname
  [ENTRY_POINT 'new_entry_point']
  [MODULE_NAME 'new_library_name']
Table 47. ALTER EXTERNAL FUNCTION Statement Parameters
Parameter Description

funcname

Function name in the database

new_entry_point

The new exported name of the function

new_library_name

The new name of the module (MODULE_NAME) from which the function is exported. This will be the name of the file, without the “.dll” or “.so” file extension.

The ALTER EXTERNAL FUNCTION statement changes the entry point and/or the module name for a user-defined function (UDF). Existing dependencies remain intact after the statement containing the change(s) is executed.

The ENTRY_POINT clause

is for specifying the new entry point (the name of the function as exported from the module).

The MODULE_NAME clause

is for specifying the new name of the module where the exported function is located.

Any user connected to the database can change the entry point and the module name.

Who Can Alter an External Function

The ALTER EXTERNAL FUNCTION statement can be executed by:

  • Administrators

  • Owner of the external function

  • Users with the ALTER ANY FUNCTION privilege

Examples using ALTER EXTERNAL FUNCTION
Changing the entry point for an external function
ALTER EXTERNAL FUNCTION invl ENTRY_POINT 'intNvl';
Changing the module name for an external function
ALTER EXTERNAL FUNCTION invl MODULE_NAME 'fbudf2';

5.10.3. DROP EXTERNAL FUNCTION

Used for

Removing a user-defined function (UDF) from a database

Available in

DSQL, ESQL

Syntax
DROP EXTERNAL FUNCTION funcname
Table 48. DROP EXTERNAL FUNCTION Statement Parameter
Parameter Description

funcname

Function name in the database

The DROP EXTERNAL FUNCTION statement deletes the declaration of a user-defined function from the database. If there are any dependencies on the external function, the statement will fail and the appropriate error will be raised.

Any user connected to the database can delete the declaration of an external function.

Who Can Drop an External Function

The DROP EXTERNAL FUNCTION statement can be executed by:

  • Administrators

  • Owner of the external function

  • Users with the DROP ANY FUNCTION privilege

Example using DROP EXTERNAL FUNCTION
Deleting the declaration of the addDay function.
DROP EXTERNAL FUNCTION addDay;

5.11. PACKAGE

A package is a group of procedures and functions managed as one entity.

5.11.1. CREATE PACKAGE

Used for

Declaring the package header

Available in

DSQL

Syntax
CREATE PACKAGE package_name
AS
BEGIN
  [ <package_item> ... ]
END

<package_item> ::=
    <function_decl>;
  | <procedure_decl>;

<function_decl> ::=
  FUNCTION funcname [ ( [ <in_params> ] ) ]
  RETURNS <domain_or_non_array_type> [COLLATE collation]
  [DETERMINISTIC]

<procedure_decl> ::=
  PROCEDURE procname [ ( [ <in_params> ] ) ]
  [RETURNS (<out_params>)]

<in_params> ::= <inparam> [, <inparam> ... ]

<inparam> ::= <param_decl> [ { = | DEFAULT } <value> ]

<out_params> ::= <outparam> [, <outparam> ...]

<outparam> ::= <param_decl>

<value> ::= { literal | NULL | context_var }

<param_decl> ::= paramname <domain_or_non_array_type> [NOT NULL]
  [COLLATE collation]

<domain_or_non_array_type> ::=
  !! See Scalar Data Types Syntax !!
Table 49. CREATE PACKAGE Statement Parameters
Parameter Description

package_name

Package name consisting of up to 31 characters. The package name must be unique among all package names.

function_decl

Function declaration

procedure_decl

Procedure declaration

funcname

Function name consisting of up to 31 characters. The function name must be unique within the package.

procname

Procedure name consisting of up to 31 characters. The procedure name must be unique within the package.

collation

Collation sequence

inparam

Input parameter declaration

outparam

Output parameter declaration

literal

A literal value that is assignment-compatible with the data type of the parameter

context_var

Any context variable that is assignment-compatible with the data type of the parameter

paramname

The name of an input parameter of a procedure or function, or an output parameter of a procedure. It may consist of up to 31 characters. The name of the parameter must be unique among input and output parameters of the procedure or function.

The CREATE PACKAGE statement creates a new package header. Routines (procedures and functions) declared in the package header are available outside the package using the full identifier (package_name.proc_name or package_name.func_name). Routines defined only in the package body — but not in the package header — are not visible outside the package.

Package procedure and function names may shadow global routines

If a package header or package body declares a procedure or function with the same name as a stored procedure or function in the global namespace, it is not possible to call that global procedure or function from the package body. In this case, the procedure or function of the package will always be called.

For this reason, it is recommended that the names of stored procedures and functions in packages do not overlap with names of stored procedures and functions in the global namespace.

Statement Terminators

Some SQL statement editors — specifically the isql utility that comes with Firebird and possibly some third-party editors — employ an internal convention that requires all statements to be terminated with a semi-colon. This creates a conflict with PSQL syntax when coding in these environments. If you are unacquainted with this problem and its solution, please study the details in the PSQL chapter in the section entitled Switching the Terminator in isql.
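For example, in isql the terminator can be switched temporarily, so that the semicolons inside the package header are not mistaken for the end of the statement (the caret is an arbitrary alternative terminator; the package and function names are illustrative):

SET TERM ^ ;

CREATE PACKAGE PKG_TEST
AS
BEGIN
  FUNCTION F_TEST(X INT) RETURNS INT;
END^

SET TERM ; ^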

Procedure and Function Parameters

For details on stored procedure parameters, see Parameters in CREATE PROCEDURE.

For details on function parameters, see Parameters in CREATE FUNCTION.

Who Can Create a Package

The CREATE PACKAGE statement can be executed by:

  • Administrators

  • Users with the CREATE PACKAGE privilege

The user who created the package header becomes its owner.

Examples of CREATE PACKAGE
Create a package header
CREATE PACKAGE APP_VAR
AS
BEGIN
  FUNCTION GET_DATEBEGIN() RETURNS DATE DETERMINISTIC;
  FUNCTION GET_DATEEND() RETURNS DATE DETERMINISTIC;
  PROCEDURE SET_DATERANGE(ADATEBEGIN DATE,
      ADATEEND DATE DEFAULT CURRENT_DATE);
END

5.11.2. ALTER PACKAGE

Used for

Altering the package header

Available in

DSQL

Syntax
ALTER PACKAGE package_name
AS
BEGIN
  [ <package_item> ... ]
END

!! See syntax of CREATE PACKAGE for further rules!!

The ALTER PACKAGE statement modifies the package header. It can be used to change the number and definition of procedures and functions, including their input and output parameters. However, the source and compiled form of the package body is retained, though the body might be incompatible after the change to the package header. The validity of a package body for the defined header is stored in the column RDB$PACKAGES.RDB$VALID_BODY_FLAG.
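The flag can be inspected with an ordinary query, for example:

SELECT RDB$PACKAGE_NAME, RDB$VALID_BODY_FLAG
FROM RDB$PACKAGES
WHERE RDB$PACKAGE_NAME = 'APP_VAR';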

Who Can Alter a Package

The ALTER PACKAGE statement can be executed by:

  • Administrators

  • The owner of the package

  • Users with the ALTER ANY PACKAGE privilege

Examples of ALTER PACKAGE
Modifying a package header
ALTER PACKAGE APP_VAR
AS
BEGIN
  FUNCTION GET_DATEBEGIN() RETURNS DATE DETERMINISTIC;
  FUNCTION GET_DATEEND() RETURNS DATE DETERMINISTIC;
  PROCEDURE SET_DATERANGE(ADATEBEGIN DATE,
      ADATEEND DATE DEFAULT CURRENT_DATE);
END

5.11.3. CREATE OR ALTER PACKAGE

Used for

Creating a new or altering an existing package header

Available in

DSQL

Syntax
CREATE OR ALTER PACKAGE package_name
AS
BEGIN
  [ <package_item> ... ]
END

!! See syntax of CREATE PACKAGE for further rules!!

The CREATE OR ALTER PACKAGE statement creates a new package or modifies an existing package header. If the package header does not exist, it will be created using CREATE PACKAGE. If it already exists, then it will be modified using ALTER PACKAGE while retaining existing privileges and dependencies.

Examples of CREATE OR ALTER PACKAGE
Creating a new or modifying an existing package header
CREATE OR ALTER PACKAGE APP_VAR
AS
BEGIN
  FUNCTION GET_DATEBEGIN() RETURNS DATE DETERMINISTIC;
  FUNCTION GET_DATEEND() RETURNS DATE DETERMINISTIC;
  PROCEDURE SET_DATERANGE(ADATEBEGIN DATE,
      ADATEEND DATE DEFAULT CURRENT_DATE);
END

5.11.4. DROP PACKAGE

Used for

Dropping a package header

Available in

DSQL

Syntax
DROP PACKAGE package_name
Table 50. DROP PACKAGE Statement Parameters
Parameter Description

package_name

Package name

The DROP PACKAGE statement deletes an existing package header. If a package body exists, it will be dropped together with the package header. If there are still dependencies on the package, an error will be raised.

Who Can Drop a Package

The DROP PACKAGE statement can be executed by:

  • Administrators

  • The owner of the package

  • Users with the DROP ANY PACKAGE privilege

Examples of DROP PACKAGE
Dropping a package header
DROP PACKAGE APP_VAR

5.11.5. RECREATE PACKAGE

Used for

Creating a new or recreating an existing package header

Available in

DSQL

Syntax
RECREATE PACKAGE package_name
AS
BEGIN
  [ <package_item> ... ]
END

!! See syntax of CREATE PACKAGE for further rules!!

The RECREATE PACKAGE statement creates a new package or recreates an existing package header. If a package header with the same name already exists, then this statement will first drop it and then create a new package header. It is not possible to recreate the package header if there are still dependencies on the existing package, or if the body of the package exists. Existing privileges of the package itself are not preserved, nor are privileges to execute the procedures or functions of the package.

Examples of RECREATE PACKAGE
Creating a new or recreating an existing package header
RECREATE PACKAGE APP_VAR
AS
BEGIN
  FUNCTION GET_DATEBEGIN() RETURNS DATE DETERMINISTIC;
  FUNCTION GET_DATEEND() RETURNS DATE DETERMINISTIC;
  PROCEDURE SET_DATERANGE(ADATEBEGIN DATE,
      ADATEEND DATE DEFAULT CURRENT_DATE);
END

5.12. PACKAGE BODY

5.12.1. CREATE PACKAGE BODY

Used for

Creating the package body

Available in

DSQL

Syntax
CREATE PACKAGE BODY name
AS
BEGIN
  [ <package_item> ... ]
  [ <package_body_item> ... ]
END

<package_item> ::=
  !! See CREATE PACKAGE syntax !!

<package_body_item> ::=
  <function_impl> |
  <procedure_impl>

<function_impl> ::=
  FUNCTION funcname [ ( [ <in_params> ] ) ]
  RETURNS <domain_or_non_array_type> [COLLATE collation]
  [DETERMINISTIC]
  <module-body>

<procedure_impl> ::=
  PROCEDURE procname [ ( [ <in_params> ] ) ]
  [RETURNS (<out_params>)]
  <module-body>

<module-body> ::=
  !! See Syntax of Module Body !!

<in_params> ::=
  !! See CREATE PACKAGE syntax !!
  !! See also Rules below !!

<out_params> ::=
  !! See CREATE PACKAGE syntax !!

<domain_or_non_array_type> ::=
  !! See Scalar Data Types Syntax !!
Table 51. CREATE PACKAGE BODY Statement Parameters
Parameter Description

package_name

Package name consisting of up to 31 characters. The package name must be unique among all package names.

function_impl

Function implementation. Essentially a CREATE FUNCTION statement without CREATE.

procedure_impl

Procedure implementation. Essentially a CREATE PROCEDURE statement without CREATE.

funcname

Function name consisting of up to 31 characters. The function name must be unique within the package.

collation

Collation sequence

procname

Procedure name consisting of up to 31 characters. The procedure name must be unique within the package.

The CREATE PACKAGE BODY statement creates a new package body. The package body can only be created after the package header has been created. If there is no package header with name package_name, an appropriate error will be raised.

All procedures and functions declared in the package header must be implemented in the package body. Additional procedures and functions may be defined and implemented in the package body only. Procedures and functions defined in the package body, but not declared in the package header, are not visible outside the package body.

The names of procedures and functions defined in the package body must be unique among the names of procedures and functions defined in the package header and implemented in the package body.

Package procedure and function names may shadow global routines

If a package header or package body declares a procedure or function with the same name as a stored procedure or function in the global namespace, it is not possible to call that global procedure or function from the package body. In this case, the procedure or function of the package will always be called.

For this reason, it is recommended that the names of stored procedures and functions in packages do not overlap with names of stored procedures and functions in the global namespace.

Rules
  • In the package body, all procedures and functions must be implemented with the same signature as declared in the header and at the beginning of the package body

  • The default values for procedure or function parameters cannot be overridden (as specified in the package header or in <package_item>). This means default values can only be defined in <package_body_item> for procedures or functions that have not been defined in the package header or earlier in the package body.

UDF declarations (DECLARE EXTERNAL FUNCTION) are not supported in packages. Use UDRs instead.

Who Can Create a Package Body

The CREATE PACKAGE BODY statement can be executed by:

  • Administrators

  • The owner of the package

  • Users with the ALTER ANY PACKAGE privilege

Examples of CREATE PACKAGE BODY
Creating the package body
CREATE PACKAGE BODY APP_VAR
AS
BEGIN
  -- Returns the start date of the period
  FUNCTION GET_DATEBEGIN() RETURNS DATE DETERMINISTIC
  AS
  BEGIN
    RETURN RDB$GET_CONTEXT('USER_SESSION', 'DATEBEGIN');
  END
  -- Returns the end date of the period
  FUNCTION GET_DATEEND() RETURNS DATE DETERMINISTIC
  AS
  BEGIN
    RETURN RDB$GET_CONTEXT('USER_SESSION', 'DATEEND');
  END
  -- Sets the date range of the working period
  PROCEDURE SET_DATERANGE(ADATEBEGIN DATE, ADATEEND DATE)
  AS
  BEGIN
    RDB$SET_CONTEXT('USER_SESSION', 'DATEBEGIN', ADATEBEGIN);
    RDB$SET_CONTEXT('USER_SESSION', 'DATEEND', ADATEEND);
  END
END
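With both the header and the body in place, the packaged routines can be called by their qualified names, for example:

EXECUTE PROCEDURE APP_VAR.SET_DATERANGE(DATE '2023-01-01', DATE '2023-12-31');

SELECT APP_VAR.GET_DATEBEGIN() AS DATEBEGIN,
       APP_VAR.GET_DATEEND() AS DATEEND
FROM RDB$DATABASE;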

5.12.2. ALTER PACKAGE BODY

Used for

Altering the package body

Available in

DSQL

Syntax
ALTER PACKAGE BODY name
AS
BEGIN
  [ <package_item> ... ]
  [ <package_body_item> ... ]
END

!! See syntax of CREATE PACKAGE BODY for further rules !!

The ALTER PACKAGE BODY statement modifies the package body. It can be used to change the definition and implementation of procedures and functions of the package body.

See CREATE PACKAGE BODY for more details.

Who Can Alter a Package Body

The ALTER PACKAGE BODY statement can be executed by:

  • Administrators

  • The owner of the package

  • Users with the ALTER ANY PACKAGE privilege

Examples of ALTER PACKAGE BODY
Modifying the package body
ALTER PACKAGE BODY APP_VAR
AS
BEGIN
  -- Returns the start date of the period
  FUNCTION GET_DATEBEGIN() RETURNS DATE DETERMINISTIC
  AS
  BEGIN
    RETURN RDB$GET_CONTEXT('USER_SESSION', 'DATEBEGIN');
  END
  -- Returns the end date of the period
  FUNCTION GET_DATEEND() RETURNS DATE DETERMINISTIC
  AS
  BEGIN
    RETURN RDB$GET_CONTEXT('USER_SESSION', 'DATEEND');
  END
  -- Sets the date range of the working period
  PROCEDURE SET_DATERANGE(ADATEBEGIN DATE, ADATEEND DATE)
  AS
  BEGIN
    RDB$SET_CONTEXT('USER_SESSION', 'DATEBEGIN', ADATEBEGIN);
    RDB$SET_CONTEXT('USER_SESSION', 'DATEEND', ADATEEND);
  END
END

5.12.3. DROP PACKAGE BODY

Used for

Dropping a package body

Available in

DSQL

Syntax
DROP PACKAGE BODY package_name
Table 52. DROP PACKAGE BODY Statement Parameters
Parameter Description

package_name

Package name

The DROP PACKAGE BODY statement deletes the package body.

Who Can Drop a Package Body

The DROP PACKAGE BODY statement can be executed by:

  • Administrators

  • The owner of the package

  • Users with the ALTER ANY PACKAGE privilege

Examples of DROP PACKAGE BODY
Dropping the package body
DROP PACKAGE BODY APP_VAR;

5.12.4. RECREATE PACKAGE BODY

Used for

Creating a new or recreating an existing package body

Available in

DSQL

Syntax
RECREATE PACKAGE BODY name
AS
BEGIN
  [ <package_item> ... ]
  [ <package_body_item> ... ]
END

!! See syntax of CREATE PACKAGE BODY for further rules !!

The RECREATE PACKAGE BODY statement creates a new or recreates an existing package body. If a package body with the same name already exists, the statement will try to drop it and then create a new package body. After recreating the package body, privileges of the package and its routines are preserved.

See CREATE PACKAGE BODY for more details.

Examples of RECREATE PACKAGE BODY
Recreating the package body
RECREATE PACKAGE BODY APP_VAR
AS
BEGIN
  -- Returns the start date of the period
  FUNCTION GET_DATEBEGIN() RETURNS DATE DETERMINISTIC
  AS
  BEGIN
    RETURN RDB$GET_CONTEXT('USER_SESSION', 'DATEBEGIN');
  END
  -- Returns the end date of the period
  FUNCTION GET_DATEEND() RETURNS DATE DETERMINISTIC
  AS
  BEGIN
    RETURN RDB$GET_CONTEXT('USER_SESSION', 'DATEEND');
  END
  -- Sets the date range of the working period
  PROCEDURE SET_DATERANGE(ADATEBEGIN DATE, ADATEEND DATE)
  AS
  BEGIN
    RDB$SET_CONTEXT('USER_SESSION', 'DATEBEGIN', ADATEBEGIN);
    RDB$SET_CONTEXT('USER_SESSION', 'DATEEND', ADATEEND);
  END
END

5.13. FILTER

A BLOB FILTER is a database object that is a special type of external function, with the sole purpose of taking a BLOB object in one format and converting it to a BLOB object in another format. The formats of the BLOB objects are specified with user-defined BLOB subtypes.

External functions for converting BLOB types are stored in dynamic libraries and loaded when necessary.

For more details on BLOB subtypes, see Binary Data Types.

5.13.1. DECLARE FILTER

Used for

Declaring a BLOB filter to the database

Available in

DSQL, ESQL

Syntax
DECLARE FILTER filtername
  INPUT_TYPE <sub_type> OUTPUT_TYPE <sub_type>
  ENTRY_POINT 'function_name' MODULE_NAME 'library_name'

<sub_type> ::= number | <mnemonic>

<mnemonic> ::=
    BINARY | TEXT | BLR | ACL | RANGES
  | SUMMARY | FORMAT | TRANSACTION_DESCRIPTION
  | EXTERNAL_FILE_DESCRIPTION | user_defined
Table 53. DECLARE FILTER Statement Parameters
Parameter Description

filtername

Filter name in the database. It may consist of up to 31 characters. It need not be the same name as the name exported from the filter library via ENTRY_POINT.

sub_type

BLOB subtype

number

BLOB subtype number (must be negative)

mnemonic

BLOB subtype mnemonic name

function_name

The exported name (entry point) of the function

library_name

The name of the module where the filter is located

user_defined

User-defined BLOB subtype mnemonic name

The DECLARE FILTER statement makes a BLOB filter available to the database. The name of the BLOB filter must be unique among the names of BLOB filters.

Specifying the Subtypes

The subtypes can be specified as the subtype number or as the subtype mnemonic name. Custom subtypes must be represented by negative numbers (from -1 to -32,768). An attempt to declare more than one BLOB filter with the same combination of the input and output types will fail with an error.

INPUT_TYPE

clause defining the BLOB subtype of the object to be converted

OUTPUT_TYPE

clause defining the BLOB subtype of the object to be created.

Mnemonic names can be defined for custom BLOB subtypes and inserted manually into the RDB$TYPES system table:

INSERT INTO RDB$TYPES (RDB$FIELD_NAME, RDB$TYPE, RDB$TYPE_NAME)
VALUES ('RDB$FIELD_SUB_TYPE', -33, 'MIDI');

After the transaction is committed, the mnemonic names can be used in declarations when you create new filters.
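For example, once the MIDI mnemonic from the insert above has been committed, it can be used in a declaration (the entry point and module name are illustrative):

DECLARE FILTER MIDI2TEXT
  INPUT_TYPE MIDI OUTPUT_TYPE TEXT
  ENTRY_POINT 'midi2text' MODULE_NAME 'myfilterlib';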

The value of the column RDB$FIELD_NAME must always be 'RDB$FIELD_SUB_TYPE'. If a mnemonic name was defined in upper case, it can be used case-insensitively and without quotation marks when a filter is declared, following the rules for other object names.

Warning

From Firebird 3.0 onward, the system tables will no longer be writable by users. However, inserting custom types into RDB$TYPES is still possible. Firebird 4 will introduce a system privilege CREATE_USER_TYPES for creating custom subtypes.

Parameters
ENTRY_POINT

clause defining the name of the entry point (the name of the imported function) in the module.

MODULE_NAME

The clause defining the name of the module where the exported function is located. By default, modules must be located in the UDF folder of the root directory on the server. The UdfAccess parameter in firebird.conf allows access restrictions to filter libraries to be configured.

Any user connected to the database can declare a BLOB filter.

Who Can Create a BLOB Filter?

The DECLARE FILTER statement can be executed by:

  • Administrators

  • Users with the CREATE FILTER privilege

The user executing the DECLARE FILTER statement becomes the owner of the filter.

Examples of DECLARE FILTER
  1. Creating a BLOB filter using subtype numbers.

    DECLARE FILTER DESC_FILTER
      INPUT_TYPE 1
      OUTPUT_TYPE -4
      ENTRY_POINT 'desc_filter'
      MODULE_NAME 'FILTERLIB';
  2. Creating a BLOB filter using subtype mnemonic names.

    DECLARE FILTER FUNNEL
      INPUT_TYPE blr OUTPUT_TYPE text
      ENTRY_POINT 'blr2asc' MODULE_NAME 'myfilterlib';
See also

DROP FILTER

5.13.2. DROP FILTER

Used for

Removing a BLOB filter declaration from the database

Available in

DSQL, ESQL

Syntax
DROP FILTER filtername
Table 54. DROP FILTER Statement Parameter
Parameter Description

filtername

Filter name in the database

The DROP FILTER statement removes the declaration of a BLOB filter from the database. Removing a BLOB filter from a database makes it unavailable for use from that database. The dynamic library where the conversion function is located remains intact and the removal from one database does not affect other databases in which the same BLOB filter is still declared.

Who Can Drop a BLOB Filter?

The DROP FILTER statement can be executed by:

  • Administrators

  • The owner of the filter

  • Users with the DROP ANY FILTER privilege

DROP FILTER Example
Dropping a BLOB filter.
DROP FILTER DESC_FILTER;
See also

DECLARE FILTER

5.14. SEQUENCE (GENERATOR)

A sequence or a generator is a database object used to get unique number values to fill a series. “Sequence” is the SQL-compliant term for the same thing which, in Firebird, has traditionally been known as “generator”. Firebird has syntax for both terms.

Sequences (or generators) are always stored as 64-bit integers, regardless of the SQL dialect of the database.

If a client is connected using Dialect 1, the server sends sequence values to it as 32-bit integers. Passing a sequence value to a 32-bit field or variable will not cause errors as long as the current value of the sequence does not exceed the limits of a 32-bit number. However, as soon as the sequence value exceeds this limit, a database in Dialect 3 will produce an error. A database in Dialect 1 will keep truncating the values, which will compromise the uniqueness of the series.

This section describes how to create, alter, set and drop sequences.

5.14.1. CREATE SEQUENCE

Used for

Creating a new SEQUENCE (GENERATOR)

Available in

DSQL, ESQL

Syntax
CREATE {SEQUENCE | GENERATOR} seq_name
  [START WITH start_value]
  [INCREMENT [BY] increment]
Table 55. CREATE SEQUENCE Statement Parameters
Parameter Description

seq_name

Sequence (generator) name. It may consist of up to 31 characters

start_value

Initial value of the sequence

increment

Increment of the sequence (when using NEXT VALUE FOR seq_name); cannot be 0

The statements CREATE SEQUENCE and CREATE GENERATOR are synonymous — both create a new sequence. Either can be used, but CREATE SEQUENCE is recommended as that is the syntax defined in the SQL standard.

When a sequence is created, its value is set to the value specified in the optional START WITH clause. If there is no START WITH clause, the sequence is set to 0.

The optional INCREMENT [BY] clause allows you to specify an increment for the NEXT VALUE FOR seq_name expression. By default, the increment is 1 (one). The increment cannot be set to 0 (zero). The GEN_ID(seq_name, <step>) function can be called instead, to “step” the series by a different integer number. The increment specified through INCREMENT [BY] is not used for GEN_ID.
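For example, for a sequence created with INCREMENT BY 5, the two calls below step the series differently (RDB$DATABASE is used only to produce a single row):

SELECT NEXT VALUE FOR EMP_NO_GEN FROM RDB$DATABASE; -- advances by 5
SELECT GEN_ID(EMP_NO_GEN, 1) FROM RDB$DATABASE;     -- advances by exactly 1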

Bug with START WITH and INCREMENT [BY]

The SQL standard specifies that the START WITH clause should specify the initial value generated on the first call to NEXT VALUE FOR seq_name, but instead Firebird uses it to set the current value of the sequence. As a result the first call to NEXT VALUE FOR seq_name incorrectly generates the value start_value + increment.

Creating a sequence without a START WITH clause is currently equivalent to specifying START WITH 0, while it should be equivalent to START WITH 1.

This will be fixed in Firebird 4, see also CORE-6084
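A short illustration of the behaviour described above, using a hypothetical sequence SEQ_DEMO:

CREATE SEQUENCE SEQ_DEMO START WITH 5 INCREMENT BY 10;
COMMIT;

SELECT NEXT VALUE FOR SEQ_DEMO FROM RDB$DATABASE;
-- returns 15 (start_value + increment), not 5 as the standard prescribes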

Non-standard behaviour for negative increments

The SQL standard specifies that sequences with a negative increment should start at the maximum value of the sequence (2^63 - 1) and count down. Firebird does not do that; instead, it starts at 0 + increment.

This may change in a future Firebird version.

Who Can Create a Sequence?

The CREATE SEQUENCE (CREATE GENERATOR) statement can be executed by:

  • Administrators

  • Users with the CREATE SEQUENCE (CREATE GENERATOR) privilege

The user executing the CREATE SEQUENCE (CREATE GENERATOR) statement becomes its owner.

Examples of CREATE SEQUENCE
  1. Creating the EMP_NO_GEN sequence using CREATE SEQUENCE.

    CREATE SEQUENCE EMP_NO_GEN;
  2. Creating the EMP_NO_GEN sequence using CREATE GENERATOR.

    CREATE GENERATOR EMP_NO_GEN;
  3. Creating the EMP_NO_GEN sequence with an initial value of 5 and an increment of 1. See note Bug with START WITH and INCREMENT [BY].

    CREATE SEQUENCE EMP_NO_GEN START WITH 5;
  4. Creating the EMP_NO_GEN sequence with an initial value of 1 and an increment of 10. See note Bug with START WITH and INCREMENT [BY].

    CREATE SEQUENCE EMP_NO_GEN INCREMENT BY 10;
  5. Creating the EMP_NO_GEN sequence with an initial value of 5 and an increment of 10. See note Bug with START WITH and INCREMENT [BY].

    CREATE SEQUENCE EMP_NO_GEN START WITH 5 INCREMENT BY 10;

5.14.2. ALTER SEQUENCE

Used for

Setting the value of a sequence or generator to a specified value

Available in

DSQL

Syntax
ALTER {SEQUENCE | GENERATOR} seq_name
  [RESTART [WITH newvalue]]
  [INCREMENT [BY] increment]
Table 56. ALTER SEQUENCE Statement Parameters
Parameter Description

seq_name

Sequence (generator) name

newvalue

New sequence (generator) value. A 64-bit integer from -2^63 to 2^63 - 1.

increment

Increment of the sequence (when using NEXT VALUE FOR seq_name); cannot be 0

The ALTER SEQUENCE statement sets the current value of a sequence or generator to the specified value and/or changes the increment of the sequence.

The RESTART WITH newvalue clause allows you to set the value of a sequence. The RESTART clause (without WITH) restarts the sequence with the initial value configured using the START WITH clause when the sequence was created.

Bugs with RESTART

The initial value (either saved in the metadata or specified in the WITH clause) is used to set the current value of the sequence, instead of the next value generated as required by the SQL standard. See note Bug with START WITH and INCREMENT [BY] for more information.

In addition, RESTART WITH newvalue will not only restart the sequence with the specified value, but also store newvalue as the new initial value of the sequence. This means that a subsequent ALTER SEQUENCE RESTART will also use newvalue. This behaviour does not match the behaviour specified in the SQL standard.

This bug will be fixed in Firebird 4, see also CORE-6386

Incorrect use of the ALTER SEQUENCE statement (changing the current value of the sequence or generator) is likely to break the logical integrity of data.

INCREMENT [BY] allows you to change the sequence increment for the NEXT VALUE FOR expression.

Changing the increment value takes effect for all queries that run after the transaction commits. Procedures that are called for the first time after the change is committed will use the new value if they use NEXT VALUE FOR. Procedures that were already used (and cached in the metadata cache) will continue to use the old increment. You may need to close all connections to the database for the metadata cache to clear and the new increment to be used. Procedures using NEXT VALUE FOR do not need to be recompiled to see the new increment. Procedures using GEN_ID(gen, expression) are not affected when the increment is changed.

Who Can Alter a Sequence?

The ALTER SEQUENCE (ALTER GENERATOR) statement can be executed by:

  • Administrators

  • The owner of the sequence

  • Users with the ALTER ANY SEQUENCE (ALTER ANY GENERATOR) privilege

Examples of ALTER SEQUENCE
  1. Setting the value of the EMP_NO_GEN sequence to 145.

    ALTER SEQUENCE EMP_NO_GEN RESTART WITH 145;
  2. Resetting the base value of the sequence EMP_NO_GEN to the initial value stored in the metadata

    ALTER SEQUENCE EMP_NO_GEN RESTART;
  3. Changing the increment of sequence EMP_NO_GEN to 10

    ALTER SEQUENCE EMP_NO_GEN INCREMENT BY 10;

5.14.3. CREATE OR ALTER SEQUENCE

Used for

Creating a new or modifying an existing sequence

Available in

DSQL, ESQL

Syntax
CREATE OR ALTER {SEQUENCE | GENERATOR} seq_name
  {RESTART | START WITH start_value}
  [INCREMENT [BY] increment]
Table 57. CREATE OR ALTER SEQUENCE Statement Parameters
Parameter Description

seq_name

Sequence (generator) name. It may consist of up to 31 characters

start_value

Initial value of the sequence

increment

Increment of the sequence (when using NEXT VALUE FOR seq_name); cannot be 0

If the sequence does not exist, it will be created. An existing sequence will be changed:

  • If RESTART is specified, the sequence will be restarted with the initial value stored in the metadata

  • If the START WITH clause is specified, start_value is stored as the initial value in the metadata, and the sequence is restarted

  • If the INCREMENT [BY] clause is specified, increment is stored as the increment in the metadata, and used for subsequent calls to NEXT VALUE FOR

Example of CREATE OR ALTER SEQUENCE
Create a new or modify an existing sequence EMP_NO_GEN
CREATE OR ALTER SEQUENCE EMP_NO_GEN
  START WITH 10
  INCREMENT BY 1

5.14.4. DROP SEQUENCE

Used for

Dropping (deleting) a SEQUENCE (GENERATOR)

Available in

DSQL, ESQL

Syntax
DROP {SEQUENCE | GENERATOR} seq_name
Table 58. DROP SEQUENCE Statement Parameter
Parameter Description

seq_name

Sequence (generator) name. It may consist of up to 31 characters

The statements DROP SEQUENCE and DROP GENERATOR are equivalent: both drop (delete) an existing sequence (generator). Either is valid, but DROP SEQUENCE, being defined in the SQL standard, is recommended.

The statements will fail if the sequence (generator) has dependencies.

Who Can Drop a Sequence?

The DROP SEQUENCE (DROP GENERATOR) statement can be executed by:

  • Administrators

  • The owner of the sequence

  • Users with the DROP ANY SEQUENCE (DROP ANY GENERATOR) privilege

Example of DROP SEQUENCE
Dropping the EMP_NO_GEN series:
DROP SEQUENCE EMP_NO_GEN;

5.14.5. RECREATE SEQUENCE

Used for

Creating or recreating a sequence (generator)

Available in

DSQL, ESQL

Syntax
RECREATE {SEQUENCE | GENERATOR} seq_name
  [START WITH start_value]
  [INCREMENT [BY] increment]
Table 59. RECREATE SEQUENCE Statement Parameters
Parameter Description

seq_name

Sequence (generator) name. It may consist of up to 31 characters

start_value

Initial value of the sequence

increment

Increment of the sequence (when using NEXT VALUE FOR seq_name); cannot be 0

See CREATE SEQUENCE for the full syntax of CREATE SEQUENCE and descriptions of defining a sequence and its options.

RECREATE SEQUENCE creates or recreates a sequence. If a sequence with this name already exists, the RECREATE SEQUENCE statement will try to drop it and create a new one. Existing dependencies will prevent the statement from executing.

Example of RECREATE SEQUENCE
Recreating sequence EMP_NO_GEN
RECREATE SEQUENCE EMP_NO_GEN
  START WITH 10
  INCREMENT BY 2;

5.14.6. SET GENERATOR

Used for

Setting the value of a sequence or generator to a specified value

Available in

DSQL, ESQL

Syntax
SET GENERATOR seq_name TO new_val
Table 60. SET GENERATOR Statement Parameters
Parameter Description

seq_name

Generator (sequence) name

new_val

New sequence (generator) value. A 64-bit integer from -2^63 to 2^63 - 1.

The SET GENERATOR statement sets the current value of a sequence or generator to the specified value.

Although SET GENERATOR is considered outdated, it is retained for backward compatibility. Use of the standards-compliant ALTER SEQUENCE is recommended.

Who Can Use a SET GENERATOR?

The SET GENERATOR statement can be executed by:

  • Administrators

  • The owner of the sequence (generator)

  • Users with the ALTER ANY SEQUENCE (ALTER ANY GENERATOR) privilege

Example of SET GENERATOR
Setting the value of the EMP_NO_GEN sequence to 145:
SET GENERATOR EMP_NO_GEN TO 145;

The same can be done with ALTER SEQUENCE:

ALTER SEQUENCE EMP_NO_GEN RESTART WITH 145;

5.15. EXCEPTION

This section describes how to create, modify and delete custom exceptions for use in error handlers in PSQL modules.

5.15.1. CREATE EXCEPTION

Used for

Creating a new exception for use in PSQL modules

Available in

DSQL, ESQL

Syntax
CREATE EXCEPTION exception_name '<message>'

<message> ::= <message-part> [<message-part> ...]

<message-part> ::=
    <text>
  | @<slot>

<slot> ::= one of 1..9
Table 61. CREATE EXCEPTION Statement Parameters
Parameter Description

exception_name

Exception name. The maximum length is 31 characters

message

Default error message. The maximum length is 1,021 characters

text

Text of any character

slot

Slot number of a parameter. Numbering starts at 1. Maximum slot number is 9.

The statement CREATE EXCEPTION creates a new exception for use in PSQL modules. If an exception with the same name exists, the statement will fail with an appropriate error message.

The exception name is a standard identifier. In a Dialect 3 database, it can be enclosed in double quotes to make it case-sensitive and, if required, to use characters that are not valid in regular identifiers. See Identifiers for more information.

The default message is stored in character set NONE, i.e. in characters of any single-byte character set. The text can be overridden in the PSQL code when the exception is thrown.

The error message may contain “parameter slots” that can be filled when raising the exception.

If the message contains a parameter slot number that is greater than 9, the second and subsequent digits will be treated as literal text. For example @10 will be interpreted as slot 1 followed by a literal ‘0’.

Custom exceptions are stored in the system table RDB$EXCEPTIONS.

Who Can Create an Exception

The CREATE EXCEPTION statement can be executed by:

  • Administrators

  • Users with the CREATE EXCEPTION privilege

The user executing the CREATE EXCEPTION statement becomes the owner of the exception.

CREATE EXCEPTION Examples
Creating an exception named E_LARGE_VALUE
CREATE EXCEPTION E_LARGE_VALUE
  'The value is out of range';
Creating a parameterized exception E_INVALID_VALUE
CREATE EXCEPTION E_INVALID_VALUE
  'Invalid value @1 for field @2';
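The slots of a parameterized exception are filled at raise time with the USING clause of the PSQL EXCEPTION statement; a minimal sketch (the variable and field name are illustrative):

EXECUTE BLOCK AS
  DECLARE VAL INT = 1000;
BEGIN
  IF (VAL > 999) THEN
    EXCEPTION E_INVALID_VALUE USING (VAL, 'QUANTITY');
END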
Tips

Grouping CREATE EXCEPTION statements together in system update scripts will simplify working with them and documenting them. A system of prefixes for naming and categorising groups of exceptions is recommended.

5.15.2. ALTER EXCEPTION

Used for

Modifying the message returned from a custom exception

Available in

DSQL, ESQL

Syntax
ALTER EXCEPTION exception_name '<message>'

!! See syntax of CREATE EXCEPTION for further rules !!

The statement ALTER EXCEPTION can be used at any time, to modify the default text of the message.

Who Can Alter an Exception

The ALTER EXCEPTION statement can be executed by:

  • Administrators

  • The owner of the exception

  • Users with the ALTER ANY EXCEPTION privilege

ALTER EXCEPTION Examples
Changing the default message for the exception E_LARGE_VALUE
ALTER EXCEPTION E_LARGE_VALUE
  'The value exceeds the prescribed limit of 32,765 bytes';

5.15.3. CREATE OR ALTER EXCEPTION

Used for

Modifying the message returned from a custom exception, if the exception exists; otherwise, creating a new exception

Available in

DSQL

Syntax
CREATE OR ALTER EXCEPTION exception_name '<message>'

!! See syntax of CREATE EXCEPTION for further rules !!

The statement CREATE OR ALTER EXCEPTION is used to create the specified exception if it does not exist, or to modify the text of the error message returned from it if it exists already. If an existing exception is altered by this statement, any existing dependencies will remain intact.

CREATE OR ALTER EXCEPTION Example
Changing the message for the exception E_LARGE_VALUE
CREATE OR ALTER EXCEPTION E_LARGE_VALUE
  'The value is higher than the permitted range 0 to 32,765';

5.15.4. DROP EXCEPTION

Used for

Deleting a custom exception

Available in

DSQL, ESQL

Syntax
DROP EXCEPTION exception_name
Table 62. DROP EXCEPTION Statement Parameter
Parameter Description

exception_name

Exception name

The statement DROP EXCEPTION is used to delete an exception. Any dependencies on the exception will cause the statement to fail, and the exception will not be deleted.

Who Can Drop an Exception

The DROP EXCEPTION statement can be executed by:

  • Administrators

  • The owner of the exception

  • Users with the DROP ANY EXCEPTION privilege

DROP EXCEPTION Examples
Dropping exception E_LARGE_VALUE
DROP EXCEPTION E_LARGE_VALUE;

5.15.5. RECREATE EXCEPTION

Used for

Creating a new custom exception or recreating an existing one

Available in

DSQL

Syntax
RECREATE EXCEPTION exception_name '<message>'

!! See syntax of CREATE EXCEPTION for further rules !!

The statement RECREATE EXCEPTION creates a new exception for use in PSQL modules. If an exception with the same name exists already, the RECREATE EXCEPTION statement will try to drop it and create a new one. If there are any dependencies on the existing exception, the attempted deletion fails and RECREATE EXCEPTION is not executed.

RECREATE EXCEPTION Example
Recreating the E_LARGE_VALUE exception
RECREATE EXCEPTION E_LARGE_VALUE
  'The value exceeds its limit';

5.16. COLLATION

In SQL, text strings are sortable objects. This means that they obey ordering rules, such as alphabetical order. Comparison operations can be applied to such text strings (for example, “less than” or “greater than”), where the comparison must apply a certain sort order or collation. For example, the expression “'a' < 'b'” means that ‘'a'’ precedes ‘'b'’ in the collation. The expression “'c' > 'b'” means that ‘'c'’ follows ‘'b'’ in the collation. Text strings of more than one character are sorted using sequential character comparisons: first the first characters of the two strings are compared, then the second characters, and so on, until a difference is found between the two strings. This difference defines the sort order.

A COLLATION is the schema object that defines a collation (or sort order).

5.16.1. CREATE COLLATION

Used for

Creating a new collation for a supported character set available to the database

Available in

DSQL

Syntax
CREATE COLLATION collname
    FOR charset
    [FROM {basecoll | EXTERNAL ('extname')}]
    [NO PAD | PAD SPACE]
    [CASE [IN]SENSITIVE]
    [ACCENT [IN]SENSITIVE]
    ['<specific-attributes>']

<specific-attributes> ::= <attribute> [; <attribute> ...]

<attribute> ::= attrname=attrvalue
Table 63. CREATE COLLATION Statement Parameters
Parameter Description

collname

The name to use for the new collation. The maximum length is 31 characters

charset

A character set present in the database

basecoll

A collation already present in the database

extname

The collation name used in the .conf file

The CREATE COLLATION statement does not “create” anything; its purpose is to make a collation known to a database. The collation must already be present on the system, typically in a library file, and must be properly registered in a .conf file in the intl subdirectory of the Firebird installation.

The collation may alternatively be based on one that is already present in the database.

How the Engine Detects the Collation

The optional FROM clause specifies the base collation that is used to derive a new collation. This collation must already be present in the database. If the keyword EXTERNAL is specified, then Firebird will scan the .conf files in $fbroot/intl/, where extname must exactly match the name in the configuration file (case-sensitive).

If no FROM clause is present, Firebird will scan the .conf file(s) in the intl subdirectory for a collation with the collation name specified in CREATE COLLATION. In other words, omitting the FROM basecoll clause is equivalent to specifying FROM EXTERNAL ('collname').

The — single-quoted — extname is case-sensitive and must correspond exactly with the collation name in the .conf file. The collname, charset and basecoll parameters are case-insensitive unless enclosed in double-quotes.

When creating a collation, you can specify whether trailing spaces are included in the comparison. If the NO PAD clause is specified, trailing spaces are taken into account in the comparison. If the PAD SPACE clause is specified, trailing spaces are ignored in the comparison.

The optional CASE clause allows you to specify whether the comparison is case-sensitive or case-insensitive.

The optional ACCENT clause allows you to specify whether the comparison is accent-sensitive or accent-insensitive (e.g. if ‘'e'’ and ‘'é'’ are considered equal or unequal).

Specific Attributes

The CREATE COLLATION statement can also include specific attributes to configure the collation. The available specific attributes are listed in the table below. Not all specific attributes apply to every collation. If the attribute is not applicable to the collation, but is specified when creating it, it will not cause an error.

Specific attribute names are case sensitive.

In the table, “1 bpc” indicates that an attribute is valid for collations of character sets using 1 byte per character (so-called narrow character sets), and “UNI” for “Unicode collations”.

Table 64. Specific Collation Attributes
Attribute Values Valid for Comment

DISABLE-COMPRESSIONS

0, 1

1 bpc

Disables compressions (a.k.a. contractions). Compressions cause certain character sequences to be sorted as atomic units, e.g. Spanish c+h as a single character ch

DISABLE-EXPANSIONS

0, 1

1 bpc

Disables expansions. Expansions cause certain characters (e.g. ligatures or umlauted vowels) to be treated as character sequences and sorted accordingly

ICU-VERSION

default or M.m

UNI

Specifies the ICU library version to use. Valid values are the ones defined in the applicable <intl_module> element in intl/fbintl.conf. Format: either the string literal “default” or a major+minor version number like “3.0” (both unquoted).

LOCALE

xx_YY

UNI

Specifies the collation locale. Requires complete version of ICU libraries. Format: a locale string like “du_NL” (unquoted)

MULTI-LEVEL

0, 1

1 bpc

Uses more than one ordering level

NUMERIC-SORT

0, 1

UNI

Treats contiguous groups of decimal digits in the string as atomic units and sorts them numerically. (This is also known as natural sorting)

SPECIALS-FIRST

0, 1

1 bpc

Orders special characters (spaces, symbols etc.) before alphanumeric characters

If you want to add a new character set with its default collation into your database, declare and run the stored procedure sp_register_character_set(name, max_bytes_per_character), found in misc/intl.sql under the Firebird installation directory.

In order for this to work, the character set must be present on the system and registered in a .conf file in the intl subdirectory.
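For example, assuming a single-byte character set registered in a .conf file under the hypothetical name NEW_CHARSET, and that misc/intl.sql has been run to create the procedure:

EXECUTE PROCEDURE sp_register_character_set('NEW_CHARSET', 1);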

Who Can Create a Collation

The CREATE COLLATION statement can be executed by:

  • Administrators

  • Users with the CREATE COLLATION privilege

The user executing the CREATE COLLATION statement becomes the owner of the collation.

Examples using CREATE COLLATION
  1. Creating a collation using the name found in the fbintl.conf file (case-sensitive)

    CREATE COLLATION ISO8859_1_UNICODE FOR ISO8859_1;
  2. Creating a collation using a special (user-defined) name (the “external” name must completely match the name in the fbintl.conf file)

    CREATE COLLATION LAT_UNI
      FOR ISO8859_1
      FROM EXTERNAL ('ISO8859_1_UNICODE');
  3. Creating a case-insensitive collation based on one already existing in the database

    CREATE COLLATION ES_ES_NOPAD_CI
      FOR ISO8859_1
      FROM ES_ES
      NO PAD
      CASE INSENSITIVE;
  4. Creating a case-insensitive collation based on one already existing in the database with specific attributes

    CREATE COLLATION ES_ES_CI_COMPR
      FOR ISO8859_1
      FROM ES_ES
      CASE INSENSITIVE
      'DISABLE-COMPRESSIONS=0';
  5. Creating a case-insensitive collation by the value of numbers (the so-called natural collation)

    CREATE COLLATION nums_coll FOR UTF8
      FROM UNICODE
      CASE INSENSITIVE 'NUMERIC-SORT=1';
    
    CREATE DOMAIN dm_nums AS varchar(20)
      CHARACTER SET UTF8 COLLATE nums_coll; -- original (manufacturer) numbers
    
    CREATE TABLE wares(id int primary key, articul dm_nums ...);
See also

DROP COLLATION

5.16.2. DROP COLLATION

Used for

Removing a collation from the database

Available in

DSQL

Syntax
DROP COLLATION collname
Table 65. DROP COLLATION Statement Parameters
Parameter Description

collname

The name of the collation

The DROP COLLATION statement removes the specified collation from the database, if it exists. An error will be raised if the specified collation is not present.

If you want to remove an entire character set with all its collations from the database, declare and execute the stored procedure sp_unregister_character_set(name) from the misc/intl.sql script in the Firebird installation directory.

Who Can Drop a Collation

The DROP COLLATION statement can be executed by:

  • Administrators

  • The owner of the collation

  • Users with the DROP ANY COLLATION privilege

Example using DROP COLLATION
Deleting the ES_ES_NOPAD_CI collation.
DROP COLLATION ES_ES_NOPAD_CI;
See also

CREATE COLLATION

5.17. CHARACTER SET

5.17.1. ALTER CHARACTER SET

Used for

Setting the default collation for a character set

Available in

DSQL

Syntax
ALTER CHARACTER SET charset
  SET DEFAULT COLLATION collation
Table 66. ALTER CHARACTER SET Statement Parameters
Parameter Description

charset

Character set identifier

collation

The name of the collation

The ALTER CHARACTER SET statement changes the default collation of the specified character set. The change affects future usage of the character set, except where a different collation is specified explicitly with a COLLATE clause. The collation of existing domains, columns and PSQL variables remains intact after the change to the default collation of the underlying character set.

If you change the default collation for the database character set (the one defined when the database was created), it will change the default collation for the database.

If you change the default collation for the character set that was specified during the connection, string constants will be interpreted according to the new collation value, except in those cases where the character set and/or the collation have been overridden.

Who Can Alter a Character Set

The ALTER CHARACTER SET statement can be executed by:

  • Administrators

  • Users with the ALTER ANY CHARACTER SET privilege

ALTER CHARACTER SET Example
Setting the default UNICODE_CI_AI collation for the UTF8 encoding
ALTER CHARACTER SET UTF8
  SET DEFAULT COLLATION UNICODE_CI_AI;

5.18. Comments

Database objects and a database itself may be annotated with comments. It is a convenient mechanism for documenting the development and maintenance of a database. Comments created with COMMENT ON will survive a gbak backup and restore.

5.18.1. COMMENT ON

Used for

Documenting metadata

Available in

DSQL

Syntax
COMMENT ON <object> IS {'sometext' | NULL}

<object> ::=
    {DATABASE | SCHEMA}
  | <basic-type> objectname
  | COLUMN relationname.fieldname
  | [{PROCEDURE | FUNCTION}] PARAMETER
      [package_name.]routinename.paramname
  | {PROCEDURE | [EXTERNAL] FUNCTION}
      [package_name.]routinename

<basic-type> ::=
    CHARACTER SET | COLLATION | DOMAIN
  | EXCEPTION     | FILTER    | GENERATOR
  | INDEX         | PACKAGE   | ROLE
  | SEQUENCE      | TABLE     | TRIGGER
  | USER          | VIEW
Table 67. COMMENT ON Statement Parameters
Parameter Description

sometext

Comment text

basic-type

Metadata object type

objectname

Metadata object name

relationname

Name of table or view

fieldname

Name of the column

package_name

Name of the package

routinename

Name of stored procedure or function

paramname

Name of a stored procedure or function parameter

The COMMENT ON statement adds comments for database objects (metadata). Comments are saved to the RDB$DESCRIPTION column of the corresponding system tables. Client applications can view comments from these fields.

  1. If you add an empty comment (“''”), it will be saved as NULL in the database.

  2. The COMMENT ON USER statement will only create comments on users managed by the default usermanager (the first plugin listed in the UserManager config option). See also CORE-6479.

  3. Comments on users are stored in the security database.

  4. SCHEMA is currently a synonym for DATABASE; this may change in a future version, so we recommend always using DATABASE

Comments on users are visible to that user through the SEC$USERS virtual table.

Who Can Add a Comment

The COMMENT ON statement can be executed by:

  • Administrators

  • The owner of the object that is commented on

  • Users with the ALTER ANY object_type privilege, where object_type is the type of object commented on (e.g. PROCEDURE)

Examples using COMMENT ON
  1. Adding a comment for the current database

    COMMENT ON DATABASE IS 'It is a test (''my.fdb'') database';
  2. Adding a comment for the METALS table

    COMMENT ON TABLE METALS IS 'Metal directory';
  3. Adding a comment for the ISALLOY field in the METALS table

    COMMENT ON COLUMN METALS.ISALLOY IS '0 = fine metal, 1 = alloy';
  4. Adding a comment for a parameter

    COMMENT ON PARAMETER ADD_EMP_PROJ.EMP_NO IS 'Employee ID';
  5. Adding a comment for a package, its procedures and functions, and their parameters

    COMMENT ON PACKAGE APP_VAR IS 'Application Variables';
    
    COMMENT ON FUNCTION APP_VAR.GET_DATEBEGIN
    IS 'Returns the start date of the period';
    
    COMMENT ON PROCEDURE APP_VAR.SET_DATERANGE
    IS 'Set date range';
    
    COMMENT ON
    PROCEDURE PARAMETER APP_VAR.SET_DATERANGE.ADATEBEGIN
    IS 'Start Date';

6. Data Manipulation (DML) Statements

DML — data manipulation language — is the subset of SQL that is used by applications and procedural modules to extract and change data. Extraction, for the purpose of reading data, both raw and manipulated, is achieved with the SELECT statement. INSERT is for adding new data and DELETE is for erasing data that are no longer required. UPDATE, MERGE and UPDATE OR INSERT all modify data in various ways.

6.1. SELECT

Used for

Retrieving data

Available in

DSQL, ESQL, PSQL

Global syntax
[WITH [RECURSIVE] <cte> [, <cte> ...]]
SELECT
  [FIRST m] [SKIP n]
  [{DISTINCT | ALL}] <columns>
FROM
  <source> [[AS] alias]
  [<joins>]
[WHERE <condition>]
[GROUP BY <grouping-list>
[HAVING <aggregate-condition>]]
[PLAN <plan-expr>]
[UNION [{DISTINCT | ALL}] <other-select>]
[ORDER BY <ordering-list>]
[{ ROWS <m> [TO <n>]
 | [OFFSET n {ROW | ROWS}]
   [FETCH {FIRST | NEXT} [m] {ROW | ROWS} ONLY]
}]
[FOR UPDATE [OF <columns>]]
[WITH LOCK]
[INTO <variables>]

<variables> ::= [:]varname [, [:]varname ...]

The SELECT statement retrieves data from the database and hands them to the application or the enclosing SQL statement. Data are returned in zero or more rows, each containing one or more columns or fields. The complete collection of rows returned constitutes the result set of the statement.

The only mandatory parts of the SELECT statement are:

  • The SELECT keyword, followed by a columns list. This part specifies what you want to retrieve.

  • The FROM keyword, followed by a selectable object. This tells the engine where you want to get it from.

In its most basic form, SELECT retrieves a number of columns from a single table or view, like this:

select id, name, address
  from contacts

Or, to retrieve all the columns:

select * from sales

In practice, a SELECT statement is usually executed with a WHERE clause, which limits the rows returned. The result set may be sorted by an ORDER BY clause, and FIRST … SKIP, OFFSET … FETCH or ROWS may further limit the number of returned rows, which can be used, for example, for pagination.

The column list may contain all kinds of expressions instead of just column names, and the source need not be a table or view: it may also be a derived table, a common table expression (CTE) or a selectable stored procedure (SP). Multiple sources may be combined in a JOIN, and multiple result sets may be combined in a UNION.

The following sections discuss the available SELECT subclauses and their usage in detail.

6.1.1. FIRST, SKIP

Used for

Retrieving a slice of rows from an ordered set

Available in

DSQL, PSQL

Syntax
SELECT
  [FIRST <m>] [SKIP <n>]
  FROM ...
  ...

<m>, <n>  ::=
    <integer-literal>
  | <query-parameter>
  | (<integer-expression>)
Table 68. Arguments for the FIRST and SKIP Clauses
Argument Description

integer-literal

Integer literal

query-parameter

Query parameter placeholder: ? in DSQL and :paramname in PSQL

integer-expression

Expression returning an integer value

FIRST and SKIP are non-standard syntax

FIRST and SKIP are Firebird-specific clauses. Use the SQL-standard OFFSET, FETCH syntax wherever possible.
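
As an illustration, example 4 below (rows 81 to 100 of the People table) can be written in the standard syntax like this:

select id, name from People
  order by name asc
  offset 80 rows
  fetch next 20 rows only;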

FIRST limits the output of a query to the first m rows. SKIP will suppress the given n rows before starting to return output.

FIRST and SKIP are both optional. When used together as in “FIRST m SKIP n”, the n topmost rows of the output set are discarded, and the first m rows of the rest of the set are returned.

Characteristics of FIRST and SKIP
  • Any argument to FIRST and SKIP that is not an integer literal or an SQL parameter must be enclosed in parentheses. This implies that a subquery expression must be enclosed in two pairs of parentheses.

  • SKIP 0 is allowed but totally pointless.

  • FIRST 0 is also allowed and returns an empty set.

  • Negative SKIP and/or FIRST values result in an error.

  • If a SKIP lands past the end of the dataset, an empty set is returned.

  • If the number of rows in the dataset (or the remainder left after a SKIP) is less than the value of the m argument supplied for FIRST, that smaller number of rows is returned. These are valid results, not error conditions.

Examples of FIRST/SKIP
  1. The following query will return the first 10 names from the People table:

    select first 10 id, name from People
      order by name asc
  2. The following query will return everything but the first 10 names:

    select skip 10 id, name from People
      order by name asc
  3. And this one returns the last 10 rows. Notice the double parentheses:

    select skip ((select count(*) - 10 from People))
      id, name from People
      order by name asc
  4. This query returns rows 81 to 100 of the People table:

    select first 20 skip 80 id, name from People
      order by name asc
See also

OFFSET, FETCH, ROWS

6.1.2. The SELECT Columns List

The columns list contains one or more comma-separated value expressions. Each expression provides a value for one output column. Alternatively, * (“select star” or “select all”) can be used to stand for all the columns in a relation (i.e. a table, view or selectable stored procedure).

Syntax
SELECT
  [...]
  [{DISTINCT | ALL}] <output-column> [, <output-column> ...]
  [...]
  FROM ...

<output-column> ::=
  { [<qualifier>.]*
  | <value-expression> [COLLATE collation] [[AS] alias] }

<value-expression> ::=
  { [<qualifier>.]table-column
  | [<qualifier>.]view-column
  | [<qualifier>.]selectable-SP-outparm
  | <literal>
  | <context-variable>
  | <function-call>
  | <single-value-subselect>
  | <CASE-construct>
  | any other expression returning a single
    value of a Firebird data type or NULL }

<qualifier> ::= a relation name or alias
Table 69. Arguments for the SELECT Columns List
Argument Description

qualifier

Name of relation (view, stored procedure, derived table); or an alias for it

collation

Only for character-type columns: a collation name that exists and is valid for the character set of the data

alias

Column or field alias

table-column

Name of a table column

view-column

Name of a view column

selectable-SP-outparm

Declared name of an output parameter of a selectable stored procedure

literal

A constant (literal value)

context-variable

Context variable

function-call

Scalar, aggregate, or window function expression

single-value-subselect

A subquery returning one scalar value (singleton)

CASE-construct

CASE construct setting conditions for a return value

other-single-value-expr

Any other expression returning a single value of a Firebird data type; or NULL

It is always valid to qualify a column name (or “*”) with the name or alias of the table, view or selectable SP to which it belongs, followed by a dot (‘.’). For example, relationname.columnname, relationname.*, alias.columnname, alias.*. Qualifying is required if the column name occurs in more than one relation taking part in a join. Qualifying “*” is always mandatory if it is not the only item in the column list.

Aliases hide the original relation name: once a table, view or procedure has been aliased, only the alias can be used as its qualifier throughout the query. The relation name itself becomes unavailable.

The column list may optionally be preceded by one of the keywords DISTINCT or ALL:

  • DISTINCT filters out any duplicate rows. That is, if two or more rows have the same values in every corresponding column, only one of them is included in the result set

  • ALL is the default: it returns all of the rows, including duplicates. ALL is rarely used; it is supported for compliance with the SQL standard.

A COLLATE clause will not change the appearance of the column as such. However, if the specified collation changes the case or accent sensitivity of the column, it may influence:

  • The ordering, if an ORDER BY clause is also present and it involves that column

  • Grouping, if the column is part of a GROUP BY clause

  • The rows retrieved (and hence the total number of rows in the result set), if DISTINCT is used
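
For example, assuming a hypothetical PERSONS table with a UTF8 column LASTNAME, applying a case- and accent-insensitive collation may both collapse DISTINCT values and change the sort order:

select distinct lastname collate unicode_ci_ai
  from persons
  order by 1;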

Examples of SELECT queries with different types of column lists

A simple SELECT using only column names:

select cust_id, cust_name, phone
  from customers
  where city = 'London'

A query featuring a concatenation expression and a function call in the columns list:

select 'Mr./Mrs. ' || lastname, street, zip, upper(city)
  from contacts
  where date_last_purchase(id) = current_date

A query with two subselects:

select p.fullname,
  (select name from classes c where c.id = p.class) as class,
  (select name from mentors m where m.id = p.mentor) as mentor
from pupils p

The following query accomplishes the same as the previous one using joins instead of subselects:

select p.fullname,
  c.name as class,
  m.name as mentor
from pupils p
  join classes c on c.id = p.class
  join mentors m on m.id = p.mentor

This query uses a CASE construct to determine the correct title, e.g. when sending mail to a person:

select case upper(sex)
    when 'F' then 'Mrs.'
    when 'M' then 'Mr.'
    else ''
  end as title,
  lastname,
  address
from employees

Query using a window function. Ranks employees by salary.

SELECT
  id,
  salary,
  name,
  DENSE_RANK() OVER (ORDER BY salary) AS EMP_RANK
FROM employees
ORDER BY salary;

Querying a selectable stored procedure:

select * from interesting_transactions(2010, 3, 'S')
  order by amount

Selecting from columns of a derived table. A derived table is a parenthesized SELECT statement whose result set is used in an enclosing query as if it were a regular table or view. Here, the derived table is the parenthesized query in the FROM clause:

select fieldcount,
  count(relation) as num_tables
from (select r.rdb$relation_name as relation,
        count(*) as fieldcount
      from rdb$relations r
        join rdb$relation_fields rf
          on rf.rdb$relation_name = r.rdb$relation_name
      group by relation)
group by fieldcount

Asking the time through a context variable (CURRENT_TIME):

select current_time from rdb$database

For those not familiar with RDB$DATABASE: this is a system table that is present in all Firebird databases and is guaranteed to contain exactly one row. Although it wasn’t created for this purpose, it has become standard practice among Firebird programmers to select from this table if you want to select “from nothing”, i.e. if you need data that are not bound to a table or view, but can be derived from the expressions in the output columns alone. Another example is:

select power(12, 2) as twelve_squared, power(12, 3) as twelve_cubed
  from rdb$database

Finally, an example where you select some meaningful information from RDB$DATABASE itself:

select rdb$character_set_name from rdb$database

As you may have guessed, this will give you the default character set of the database.

6.1.3. The FROM clause

The FROM clause specifies the source(s) from which the data are to be retrieved. In its simplest form, this is just a single table or view. However, the source can also be a selectable stored procedure, a derived table, or a common table expression. Multiple sources can be combined using various types of joins.

This section focuses on single-source selects. Joins are discussed in a following section.

Syntax
SELECT
  ...
  FROM <source>
  [<joins>]
  [...]

<source> ::=
    table [[AS] alias]
  | selectable-stored-procedure [(<args>)] [[AS] alias]
  | <derived-table>

<derived-table> ::=
  (<select-statement>) [[AS] alias] [(<column-aliases>)]

<column-aliases> ::= column-alias [, column-alias ...]
Table 70. Arguments for the FROM Clause
Argument Description

table

Name of a table, view or CTE

selectable-stored-procedure

Name of a selectable stored procedure

args

Selectable stored procedure arguments

derived-table

Derived table query expression

select-statement

Any SELECT statement

column-aliases

Alias for a column in a relation, CTE or derived table

alias

The alias of a data source (table, view, procedure, CTE, derived table)

Selecting FROM a table or view

When selecting from a single table or view, the FROM clause requires nothing more than the name. An alias may be useful or even necessary if there are subqueries that refer to the main select statement (as they often do — subqueries like this are called correlated subqueries).

Examples
select id, name, sex, age from actors
where state = 'Ohio'

select * from birds
where type = 'flightless'
order by family, genus, species

select firstname,
  middlename,
  lastname,
  date_of_birth,
  (select name from schools s where p.school = s.id) schoolname
from pupils p
where year_started = '2012'
order by schoolname, date_of_birth
Never mix column names with column aliases!

If you specify an alias for a table or a view, you must always use this alias in place of the table name whenever you query the columns of the relation (and wherever else you make a reference to columns, such as ORDER BY, GROUP BY and WHERE clauses).

Correct use:

SELECT PEARS
FROM FRUIT;

SELECT FRUIT.PEARS
FROM FRUIT;

SELECT PEARS
FROM FRUIT F;

SELECT F.PEARS
FROM FRUIT F;

Incorrect use:

SELECT FRUIT.PEARS
FROM FRUIT F;
Selecting FROM a stored procedure

A selectable stored procedure is a procedure that:

  • contains at least one output parameter, and

  • utilizes the SUSPEND keyword so the caller can fetch the output rows one by one, just as when selecting from a table or view.

The output parameters of a selectable stored procedure correspond to the columns of a regular table.
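
A minimal sketch of such a procedure, using hypothetical table and column names, might look like this:

set term ^;

create procedure suspicious_transactions
  returns (id integer, assignee varchar(20))
as
begin
  /* SUSPEND hands each fetched row to the caller, one by one */
  for select id, assignee
      from transactions
      where amount > 10000
      into :id, :assignee
  do
    suspend;
end^

set term ;^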

Selecting from a stored procedure without input parameters is just like selecting from a table or view:

select * from suspicious_transactions
  where assignee = 'John'

Any required input parameters must be specified after the procedure name, enclosed in parentheses:

select name, az, alt from visible_stars('Brugge', current_date, '22:30')
  where alt >= 20
  order by az, alt

Values for optional parameters (that is, parameters for which default values have been defined) may be omitted or provided. However, if you provide only some of them, the omitted parameters must all be at the tail end of the parameter list.

Supposing that the procedure visible_stars from the previous example has two optional parameters: min_magn (numeric(3,1)) and spectral_class (varchar(12)), the following queries are all valid:

select name, az, alt
from visible_stars('Brugge', current_date, '22:30');

select name, az, alt
from visible_stars('Brugge', current_date, '22:30', 4.0);

select name, az, alt
from visible_stars('Brugge', current_date, '22:30', 4.0, 'G');

But this one isn’t, because there’s a “hole” in the parameter list:

select name, az, alt
from visible_stars('Brugge', current_date, '22:30', 'G');

An alias for a selectable stored procedure is specified after the parameter list:

select
  number,
  (select name from contestants c where c.number = gw.number)
from get_winners('#34517', 'AMS') gw

If you refer to an output parameter (“column”) by qualifying it with the full procedure name, the procedure alias should be omitted:

select
  number,
  (select name from contestants c where c.number = get_winners.number)
from get_winners('#34517', 'AMS')
Selecting FROM a derived table

A derived table is a valid SELECT statement enclosed in parentheses, optionally followed by a table alias and/or column aliases. The result set of the statement acts as a virtual table which the enclosing statement can query.

Syntax
(<select-query>)
  [[AS] derived-table-alias]
  [(<derived-column-aliases>)]

<derived-column-aliases> ::= column-alias [, column-alias ...]

The data set returned by this “SELECT FROM (SELECT FROM..)” style of statement is a virtual table that can be queried within the enclosing statement, as if it were a regular table or view.

Example using a derived table

The derived table in the query below returns the list of table names in the database, and the number of columns in each table. A “drill-down” query on the derived table returns the counts of fields and the counts of tables having each field count:

SELECT
  FIELDCOUNT,
  COUNT(RELATION) AS NUM_TABLES
FROM (SELECT
        R.RDB$RELATION_NAME RELATION,
        COUNT(*) AS FIELDCOUNT
      FROM RDB$RELATIONS R
        JOIN RDB$RELATION_FIELDS RF
        ON RF.RDB$RELATION_NAME = R.RDB$RELATION_NAME
        GROUP BY RELATION)
GROUP BY FIELDCOUNT

A trivial example demonstrating how the alias of a derived table and the list of column aliases (both optional) can be used:

SELECT
  DBINFO.DESCR, DBINFO.DEF_CHARSET
FROM (SELECT *
      FROM RDB$DATABASE) DBINFO
        (DESCR, REL_ID, SEC_CLASS, DEF_CHARSET)
More about Derived Tables

Derived tables can

  • be nested

  • be unions, and can be used in unions

  • contain aggregate functions, subqueries and joins

  • be used in aggregate functions, subqueries and joins

  • be calls to selectable stored procedures or queries to them

  • have WHERE, ORDER BY and GROUP BY clauses, FIRST/SKIP or ROWS directives, et al.

Furthermore,

  • Each column in a derived table must have a name. If it does not have a name, such as when it is a constant or a run-time expression, it should be given an alias, either in the regular way or by including it in the list of column aliases in the derived table’s specification.

    • The list of column aliases is optional but, if it exists, it must contain an alias for every column in the derived table

  • The optimizer can process derived tables very effectively. However, if a derived table is included in an inner join and contains a subquery, the optimizer will be unable to use any join order.

A more useful example

Suppose we have a table COEFFS which contains the coefficients of a number of quadratic equations we have to solve. It has been defined like this:

create table coeffs (
  a double precision not null,
  b double precision not null,
  c double precision not null,
  constraint chk_a_not_zero check (a <> 0)
)

Depending on the values of a, b and c, each equation may have zero, one or two solutions. It is possible to find these solutions with a single-level query on table COEFFS, but the code will look rather messy and several values (like the discriminant) will have to be calculated multiple times per row. A derived table can help keep things clean here:

select
  iif (D >= 0, (-b - sqrt(D)) / denom, null) sol_1,
  iif (D >  0, (-b + sqrt(D)) / denom, null) sol_2
  from
    (select b, b*b - 4*a*c, 2*a from coeffs) (b, D, denom)

If we want to show the coefficients next to the solutions (which may not be a bad idea), we can alter the query like this:

select
  a, b, c,
  iif (D >= 0, (-b - sqrt(D)) / denom, null) sol_1,
  iif (D >  0, (-b + sqrt(D)) / denom, null) sol_2
  from
    (select a, b, c, b*b - 4*a*c as D, 2*a as denom
     from coeffs)

Notice that whereas the first query used a column aliases list for the derived table, the second adds aliases internally where needed. Both methods work, as long as every column is guaranteed to have a name.

All columns in the derived table will be evaluated as many times as they are specified in the main query. This is important, as it can lead to unexpected results when using non-deterministic functions. The following shows an example of this.

SELECT
  UUID_TO_CHAR(X) AS C1,
  UUID_TO_CHAR(X) AS C2,
  UUID_TO_CHAR(X) AS C3
FROM (SELECT GEN_UUID() AS X
      FROM RDB$DATABASE) T;

The result of this query shows three different values:

C1  80AAECED-65CD-4C2F-90AB-5D548C3C7279
C2  C1214CD3-423C-406D-B5BD-95BF432ED3E3
C3  EB176C10-F754-4689-8B84-64B666381154

To ensure a single result of the GEN_UUID function, you can use the following method:

SELECT
  UUID_TO_CHAR(X) AS C1,
  UUID_TO_CHAR(X) AS C2,
  UUID_TO_CHAR(X) AS C3
FROM (SELECT GEN_UUID() AS X
      FROM RDB$DATABASE
      UNION ALL
      SELECT NULL FROM RDB$DATABASE WHERE 1 = 0) T;

This query produces a single result for all three columns:

C1  80AAECED-65CD-4C2F-90AB-5D548C3C7279
C2  80AAECED-65CD-4C2F-90AB-5D548C3C7279
C3  80AAECED-65CD-4C2F-90AB-5D548C3C7279

An alternative solution is to wrap the GEN_UUID query in a subquery:

SELECT
  UUID_TO_CHAR(X) AS C1,
  UUID_TO_CHAR(X) AS C2,
  UUID_TO_CHAR(X) AS C3
FROM (SELECT
        (SELECT GEN_UUID() FROM RDB$DATABASE) AS X
      FROM RDB$DATABASE) T;

This is an artifact of the current implementation. This behaviour may change in a future Firebird version.

Selecting FROM a Common Table Expression (CTE)

A common table expression (CTE) is a more complex variant of the derived table, but it is also more powerful. A preamble, starting with the keyword WITH, defines one or more named CTEs, each with an optional column aliases list. The main query, which follows the preamble, can access these CTEs as if they were regular tables or views. The CTEs go out of scope once the main query has run to completion.

For a full discussion of CTEs, please refer to the section Common Table Expressions (“WITH … AS … SELECT”).

The following is a rewrite of our derived table example as a CTE:

with vars (b, D, denom) as (
  select b, b*b - 4*a*c, 2*a from coeffs
)
select
  iif (D >= 0, (-b - sqrt(D)) / denom, null) sol_1,
  iif (D >  0, (-b + sqrt(D)) / denom, null) sol_2
from vars

Except for the fact that the calculations that have to be made first are now at the beginning, this isn’t a great improvement over the derived table version. However, we can now also eliminate the double calculation of sqrt(D) for every row:

with vars (b, D, denom) as (
  select b, b*b - 4*a*c, 2*a from coeffs
),
vars2 (b, D, denom, sqrtD) as (
  select b, D, denom, iif (D >= 0, sqrt(D), null) from vars
)
select
  iif (D >= 0, (-b - sqrtD) / denom, null) sol_1,
  iif (D >  0, (-b + sqrtD) / denom, null) sol_2
from vars2

The code is a little more complicated now, but it might execute more efficiently (depending on what takes more time: executing the SQRT function or passing the values of b, D and denom through an extra CTE). Incidentally, we could have done the same with derived tables, but that would involve nesting.

All columns in the CTE will be evaluated as many times as they are specified in the main query. This is important, as it can lead to unexpected results when using non-deterministic functions. The following shows an example of this.

WITH T (X) AS (
  SELECT GEN_UUID()
  FROM RDB$DATABASE)
SELECT
  UUID_TO_CHAR(X) as c1,
  UUID_TO_CHAR(X) as c2,
  UUID_TO_CHAR(X) as c3
FROM T

The result of this query shows three different values:

C1  80AAECED-65CD-4C2F-90AB-5D548C3C7279
C2  C1214CD3-423C-406D-B5BD-95BF432ED3E3
C3  EB176C10-F754-4689-8B84-64B666381154

To ensure a single result of the GEN_UUID function, you can use the following method:

WITH T (X) AS (
  SELECT GEN_UUID()
  FROM RDB$DATABASE
  UNION ALL
  SELECT NULL FROM RDB$DATABASE WHERE 1 = 0)
SELECT
  UUID_TO_CHAR(X) as c1,
  UUID_TO_CHAR(X) as c2,
  UUID_TO_CHAR(X) as c3
FROM T;

This query produces a single result for all three columns:

C1  80AAECED-65CD-4C2F-90AB-5D548C3C7279
C2  80AAECED-65CD-4C2F-90AB-5D548C3C7279
C3  80AAECED-65CD-4C2F-90AB-5D548C3C7279

An alternative solution is to wrap the GEN_UUID query in a subquery:

WITH T (X) AS (
  SELECT (SELECT GEN_UUID() FROM RDB$DATABASE)
  FROM RDB$DATABASE)
SELECT
  UUID_TO_CHAR(X) as c1,
  UUID_TO_CHAR(X) as c2,
  UUID_TO_CHAR(X) as c3
FROM T;

This is an artifact of the current implementation. This behaviour may change in a future Firebird version.

6.1.4. Joins

Joins combine data from two sources into a single set. This is done on a row-by-row basis and usually involves checking a join condition in order to determine which rows should be merged and appear in the resulting dataset. There are several types (INNER, OUTER) and classes (qualified, natural, etc.) of joins, each with its own syntax and rules.

Since joins can be chained, the datasets involved in a join may themselves be joined sets.

Syntax
SELECT
   ...
   FROM <source>
   [<joins>]
   [...]

<source> ::=
    table [[AS] alias]
  | selectable-stored-procedure [(<args>)] [[AS] alias]
  | <derived-table>

<joins> ::= <join> [<join> ...]

<join> ::=
    [<join-type>] JOIN <source> <join-condition>
  | NATURAL [<join-type>] JOIN <source>
  | {CROSS JOIN | ,} <source>

<join-type> ::= INNER | {LEFT | RIGHT | FULL} [OUTER]

<join-condition> ::= ON <condition> | USING (<column-list>)
Table 71. Arguments for JOIN Clauses
Argument Description

table

Name of a table, view or CTE

selectable-stored-procedure

Name of a selectable stored procedure

args

Selectable stored procedure input parameter(s)

derived-table

Derived table query expression

alias

An alias for a data source (table, view, procedure, CTE, derived table)

condition

Join condition (criterion)

column-list

The list of columns used for an equi-join

Inner vs. Outer Joins

A join always combines data rows from two sets (usually referred to as the left set and the right set). By default, only rows that meet the join condition (i.e. that match at least one row in the other set when the join condition is applied) make it into the result set. This default type of join is called an inner join. Suppose we have the following two tables:

Table A

ID   S
87   Just some text
235  Silence

Table B

CODE  X
-23   56.7735
87    416.0

If we join these tables like this:

select *
  from A
  join B on A.id = B.code;

then the result set will be:

ID  S               CODE  X
87  Just some text  87    416.0

The first row of A has been joined with the second row of B because together they met the condition “A.id = B.code”. The other rows from the source tables have no match in the opposite set and are therefore not included in the join. Remember, this is an INNER join. We can make that fact explicit by writing:

select *
  from A
  inner join B on A.id = B.code;

However, since INNER is the default, it is usually omitted.

It is perfectly possible that a row in the left set matches several rows from the right set or vice versa. In that case, all those combinations are included, and we can get results like:

ID   S                 CODE  X
87   Just some text    87    416.0
87   Just some text    87    -1.0
-23  Don’t know        -23   56.7735
-23  Still don’t know  -23   56.7735
-23  I give up         -23   56.7735

Sometimes we want (or need) all the rows of one or both of the sources to appear in the joined set, regardless of whether they match a record in the other source. This is where outer joins come in. A LEFT outer join includes all the records from the left set, but only matching records from the right set. In a RIGHT outer join it’s the other way around. FULL outer joins include all the records from both sets. In all outer joins, the “holes” (the places where an included source record doesn’t have a match in the other set) are filled up with NULLs.

In order to make an outer join, you must specify LEFT, RIGHT or FULL, optionally followed by the keyword OUTER.

Below are the results of the various outer joins when applied to our original tables A and B:

select *
  from A
  left [outer] join B on A.id = B.code;

ID   S               CODE    X
87   Just some text  87      416.0
235  Silence         <null>  <null>

select *
  from A
  right [outer] join B on A.id = B.code

ID      S               CODE  X
<null>  <null>          -23   56.7735
87      Just some text  87    416.0

select *
  from A
  full [outer] join B on A.id = B.code

ID      S               CODE    X
<null>  <null>          -23     56.7735
87      Just some text  87      416.0
235     Silence         <null>  <null>

Qualified joins

Qualified joins specify conditions for the combining of rows. This happens either explicitly in an ON clause or implicitly in a USING clause.

Syntax
<qualified-join> ::= [<join-type>] JOIN <source> <join-condition>

<join-type> ::= INNER | {LEFT | RIGHT | FULL} [OUTER]

<join-condition> ::= ON <condition> | USING (<column-list>)
Explicit-condition joins

Most qualified joins have an ON clause, with an explicit condition that can be any valid Boolean expression, but usually involves some comparison between the two sources involved.

Quite often, the condition is an equality test (or a number of ANDed equality tests) using the “=” operator. Joins like these are called equi-joins. (The examples in the section on inner and outer joins were all equi-joins.)

Examples of joins with an explicit condition:

/* Select all Detroit customers who made a purchase
   in 2013, along with the purchase details: */
select * from customers c
  join sales s on s.cust_id = c.id
  where c.city = 'Detroit' and s.year = 2013;

/* Same as above, but include non-buying customers: */
select * from customers c
  left join sales s on s.cust_id = c.id
  where c.city = 'Detroit' and s.year = 2013;

/* For each man, select the women who are taller than he.
   Men for whom no such woman exists are not included. */
select m.fullname as man, f.fullname as woman
  from males m
  join females f on f.height > m.height;

/* Select all pupils with their class and mentor.
   Pupils without a mentor are also included.
   Pupils without a class are not included. */
select p.firstname, p.middlename, p.lastname,
       c.name, m.name
  from pupils p
  join classes c on c.id = p.class
  left join mentors m on m.id = p.mentor;
Named columns joins

Equi-joins often compare columns that have the same name in both tables. If this is the case, we can also use the second type of qualified join: the named columns join.

Named columns joins are not supported in Dialect 1 databases.

Named columns joins have a USING clause which states just the column names. So instead of this:

select * from flotsam f
  join jetsam j
  on f.sea = j.sea
  and f.ship = j.ship;

we can also write:

select * from flotsam
  join jetsam using (sea, ship)

which is considerably shorter. The result set is a little different though — at least when using “SELECT *”:

  • The explicit-condition join — with the ON clause — will contain each of the columns SEA and SHIP twice: once from table FLOTSAM, and once from table JETSAM. Obviously, they will have the same values.

  • The named columns join — with the USING clause — will contain these columns only once.

If you want all the columns in the result set of the named columns join, set up your query like this:

select f.*, j.*
  from flotsam f
  join jetsam j using (sea, ship);

This will give you the exact same result set as the explicit-condition join.

For an OUTER named columns join, there’s an additional twist when using “SELECT *” or an unqualified column name from the USING list:

If a row from one source set doesn’t have a match in the other but must still be included because of the LEFT, RIGHT or FULL directive, the merged column in the joined set gets the non-NULL value. That is fair enough, but now you can’t tell whether this value came from the left set, the right set, or both. This can be especially deceiving when the value came from the right hand set, because “*” always shows combined columns in the left hand part — even in the case of a RIGHT join.

Whether this is a problem or not depends on the situation. If it is, use the “a.*, b.*” approach shown above, with a and b the names or aliases of the two sources. Or better yet, avoid “*” altogether in your serious queries and qualify all column names in joined sets. This has the additional benefit that it forces you to think about which data you want to retrieve and where from.
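
As a sketch, reusing the FLOTSAM and JETSAM tables from above: qualifying the join columns alongside the merged ones makes the origin of each value visible again:

select sea, ship, f.sea as f_sea, j.sea as j_sea
  from flotsam f
  full join jetsam j using (sea, ship);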

It is your responsibility to make sure the column names in the USING list are of compatible types between the two sources. If the types are compatible but not equal, the engine converts them to the type with the broadest range of values before comparing the values. This will also be the data type of the merged column that shows up in the result set if “SELECT *” or the unqualified column name is used. Qualified columns on the other hand will always retain their original data type.

If, when joining by named columns, you are using a join column in the WHERE clause, always use the qualified column name, otherwise an index on this column will not be used.

SELECT 1 FROM t1 a JOIN t2 b USING (x) WHERE x = 0;

-- PLAN JOIN (A NATURAL , B INDEX (RDB$2))

However:

SELECT 1 FROM t1 a JOIN t2 b USING (x) WHERE a.x = 0;
-- PLAN JOIN (A INDEX (RDB$1), B INDEX (RDB$2))

SELECT 1 FROM t1 a JOIN t2 b USING (x) WHERE b.x = 0;
-- PLAN JOIN (A INDEX (RDB$1), B INDEX (RDB$2))

In fact, the unqualified column in this case is implicitly replaced by COALESCE(a.x, b.x). This clever trick is used to disambiguate column names, but it also interferes with the use of the index.
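
In other words, the first query above behaves as if it were written like this (a sketch), and the COALESCE expression prevents the use of an index on x:

SELECT 1 FROM t1 a JOIN t2 b USING (x)
WHERE COALESCE(a.x, b.x) = 0;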

Natural joins

Taking the idea of the named columns join a step further, a natural join performs an automatic equi-join on all the columns that have the same name in the left and right table. The data types of these columns must be compatible.

Natural joins are not supported in Dialect 1 databases.

Syntax
<natural-join> ::= NATURAL [<join-type>] JOIN <source>

<join-type> ::= INNER | {LEFT | RIGHT | FULL} [OUTER]

Given these two tables:

create table TA (
  a bigint,
  s varchar(12),
  ins_date date
);
create table TB (
  a bigint,
  descr varchar(12),
  x float,
  ins_date date
);

A natural join on TA and TB would involve the columns a and ins_date, and the following two statements would have the same effect:

select * from TA
  natural join TB;
select * from TA
  join TB using (a, ins_date);

Like all joins, natural joins are inner joins by default, but you can turn them into outer joins by specifying LEFT, RIGHT or FULL before the JOIN keyword.

If there are no columns with the same name in the two source relations, a CROSS JOIN is performed. We’ll get to this type of join in a minute.

Cross joins

A cross join produces the full set product (the Cartesian product) of the two data sources: every row in the left source is matched with every row in the right source.

Syntax
<cross-join> ::= {CROSS JOIN | ,} <source>

Use of the comma syntax is discouraged, and we recommend using the explicit join syntax.

Cross-joining two sets is equivalent to joining them on a tautology (a condition that is always true). The following two statements have the same effect:

select * from TA
  cross join TB;
select * from TA
  join TB on 1 = 1;

Cross joins are inner joins, because they only include matching records; it just so happens that every record matches! An outer cross join, if it existed, wouldn’t add anything to the result, because what outer joins add are non-matching records, and these don’t exist in cross joins.

Cross joins are seldom useful, except if you want to list all the possible combinations of two or more variables. Suppose you are selling a product that comes in different sizes, different colors and different materials. If these variables are each listed in a table of their own, this query would return all the combinations:

select m.name, s.size, c.name
  from materials m
  cross join sizes s
  cross join colors c;
Implicit Joins

In the SQL:89 standard, the tables involved in a join were specified as a comma-delimited list in the FROM clause (in other words, a cross join). The join conditions were then specified in the WHERE clause among other search terms. This type of join is called an implicit join.

An example of an implicit join:

/*
 * A sample of all Detroit customers who
 * made a purchase.
 */
SELECT *
FROM customers c, sales s
WHERE s.cust_id = c.id AND c.city = 'Detroit'
Mixing Explicit and Implicit Joins

Mixing explicit and implicit joins is not recommended, but is allowed. However, some types of mixing are not supported by Firebird.

For example, the following query will raise the error “Column does not belong to referenced table”:

SELECT *
FROM TA, TB
JOIN TC ON TA.COL1 = TC.COL1
WHERE TA.COL2 = TB.COL2

That is because the explicit join cannot see the TA table. However, the next query will complete without error, since the restriction is not violated.

SELECT *
FROM TA, TB
JOIN TC ON TB.COL1 = TC.COL1
WHERE TA.COL2 = TB.COL2
A Note on Equality

This note about equality and inequality operators applies everywhere in Firebird’s SQL language, not just in JOIN conditions.

The “=” operator, which is explicitly used in many conditional joins and implicitly in named column joins and natural joins, only matches values to values. According to the SQL standard, NULL is not a value and hence two NULLs are neither equal nor unequal to one another. If you need NULLs to match each other in a join, use the IS NOT DISTINCT FROM operator. This operator returns true if the operands have the same value or if they are both NULL.

select *
  from A join B
  on A.id is not distinct from B.code;

Likewise, in the — extremely rare — cases where you want to join on inequality, use IS DISTINCT FROM, not “<>”, if you want NULL to be considered different from any value and two NULLs considered equal:

select *
  from A join B
  on A.id is distinct from B.code;
Ambiguous field names in joins

Firebird rejects unqualified field names in a query if these field names exist in more than one dataset involved in a join. This is even true for inner equi-joins where the field name figures in the ON clause like this:

select a, b, c
  from TA
  join TB on TA.a = TB.a;

There is one exception to this rule: with named columns joins and natural joins, the unqualified field name of a column taking part in the matching process may be used legally and refers to the merged column of the same name. For named columns joins, these are the columns listed in the USING clause. For natural joins, they are the columns that have the same name in both relations. But please notice again that, especially in outer joins, plain colname isn’t always the same as left.colname or right.colname. Types may differ, and one of the qualified columns may be NULL while the other isn’t. In that case, the value in the merged, unqualified column may mask the fact that one of the source values is absent.
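
For example, with the TA and TB tables from the natural joins section above, the unqualified a and ins_date are legal here because they refer to the merged columns:

select a, ins_date, s, descr
  from TA
  natural join TB;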

6.1.5. The WHERE clause

The WHERE clause serves to limit the rows returned to the ones that the caller is interested in. The condition following the keyword WHERE can be as simple as a check like “AMOUNT = 3” or it can be a multilayered, convoluted expression containing subselects, predicates, function calls, mathematical and logical operators, context variables and more.

The condition in the WHERE clause is often called the search condition, the search expression or simply the search.

In DSQL and ESQL, the search expression may contain parameters. This is useful if a query has to be repeated a number of times with different input values. In the SQL string as it is passed to the server, question marks are used as placeholders for the parameters. They are called positional parameters because they can only be told apart by their position in the string. Connectivity libraries often support named parameters of the form :id, :amount, :a etc. These are more user-friendly; the library takes care of translating the named parameters to positional parameters before passing the statement to the server.

The search condition may also contain local (PSQL) or host (ESQL) variable names, preceded by a colon.

Syntax
SELECT ...
  FROM ...
  [...]
  WHERE <search-condition>
  [...]
Table 72. Argument of WHERE
Parameter Description

search-condition

A Boolean expression returning TRUE, FALSE or possibly UNKNOWN (NULL)

Only those rows for which the search condition evaluates to TRUE are included in the result set. Be careful with possible NULL outcomes: if you negate a NULL expression with NOT, the result will still be NULL and the row will not pass. This is demonstrated in one of the examples below.

Examples
select genus, species from mammals
  where family = 'Felidae'
  order by genus;
select * from persons
  where birthyear in (1880, 1881)
     or birthyear between 1891 and 1898;
select name, street, borough, phone
  from schools s
  where exists (select * from pupils p where p.school = s.id)
  order by borough, street;
select * from employees
  where salary >= 10000 and position <> 'Manager';
select name from wrestlers
  where region = 'Europe'
    and weight > all (select weight from shot_putters
                      where region = 'Africa');
select id, name from players
  where team_id = (select id from teams where name = 'Buffaloes');
select sum (population) from towns
  where name like '%dam'
  and province containing 'land';
select password from usertable
  where username = current_user;

The following example shows what can happen if the search condition evaluates to NULL.

Suppose you have a table listing some children’s names and the number of marbles they possess. At a certain moment, the table contains these data:

CHILD     MARBLES
Anita     23
Bob E.    12
Chris     <null>
Deirdre   1
Eve       17
Fritz     0
Gerry     21
Hadassah  <null>
Isaac     6

First, please notice the difference between NULL and 0: Fritz is known to have no marbles at all, Chris’s and Hadassah’s marble counts are unknown.

Now, if you issue this SQL statement:

select list(child) from marbletable where marbles > 10;

you will get the names Anita, Bob E., Eve and Gerry. These children all have more than 10 marbles.

If you negate the expression:

select list(child) from marbletable where not marbles > 10

it’s the turn of Deirdre, Fritz and Isaac to fill the list. Chris and Hadassah are not included, because they aren’t known to have ten marbles or less. Should you change that last query to:

select list(child) from marbletable where marbles <= 10;

the result will still be the same, because the expression NULL <= 10 yields UNKNOWN. This is not the same as TRUE, so Chris and Hadassah are not listed. If you want them listed with the “poor” children, change the query to:

select list(child) from marbletable
where marbles <= 10 or marbles is null;

Now the search condition becomes true for Chris and Hadassah, because “marbles is null” obviously returns TRUE in their case. In fact, the search condition cannot be NULL for anybody now.

Lastly, two examples of SELECT queries with parameters in the search. It depends on the application how you should define query parameters and even if it is possible at all. Notice that queries like these cannot be executed immediately: they have to be prepared first. Once a parameterized query has been prepared, the user (or calling code) can supply values for the parameters and have it executed many times, entering new values before every call. How the values are entered and the execution started is up to the application. In a GUI environment, the user typically types the parameter values in one or more text boxes and then clicks an “Execute”, “Run” or “Refresh” button.

select name, address, phone from stores
  where city = ? and class = ?;
select * from pants
  where model = :model and size = :size and color = :col;

The last query cannot be passed directly to the engine; the application must convert it to the other format first, mapping named parameters to positional parameters.

6.1.6. The GROUP BY clause

GROUP BY merges output rows that have the same combination of values in its item list into a single row. Aggregate functions in the select list are applied to each group individually instead of to the dataset as a whole.

If the select list only contains aggregate columns or, more generally, columns whose values don’t depend on individual rows in the underlying set, GROUP BY is optional. When omitted, the final result set will consist of a single row (provided that at least one aggregated column is present).

If the select list contains both aggregate columns and columns whose values may vary per row, the GROUP BY clause becomes mandatory.

Syntax
SELECT ... FROM ...
  GROUP BY <grouping-item> [, <grouping-item> ...]
  [HAVING <grouped-row-condition>]
  ...

<grouping-item> ::=
    <non-aggr-select-item>
  | <non-aggr-expression>

<non-aggr-select-item> ::=
    column-copy
  | column-alias
  | column-position
Table 73. Arguments for the GROUP BY Clause
Argument Description

non-aggr-expression

Any non-aggregating expression that is not included in the SELECT list, i.e. unselected columns from the source set or expressions that do not depend on the data in the set at all

column-copy

A literal copy, from the SELECT list, of an expression that contains no aggregate function

column-alias

The alias, from the SELECT list, of an expression (column) that contains no aggregate function

column-position

The position number, in the SELECT list, of an expression (column) that contains no aggregate function

A general rule of thumb is that every non-aggregate item in the SELECT list must also be in the GROUP BY list. You can do this in three ways:

  1. By copying the item verbatim from the select list, e.g. “class” or “'D:' || upper(doccode)”.

  2. By specifying the column alias, if it exists.

  3. By specifying the column position as an integer literal between 1 and the number of columns. Integer values resulting from expressions or parameter substitutions are simply invariant values and are used as such in the grouping. They have no effect though, as their value is the same for each row.

If you group by a column position, the expression at that position is copied internally from the select list. If it concerns a subquery, that subquery will be executed again in the grouping phase. That is to say, grouping by the column position, rather than duplicating the subquery expression in the grouping clause, saves keystrokes and bytes, but it is not a way of saving processing cycles!
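
As a sketch using the STUDENTS table from the examples below, the following three statements produce the same grouping: by a verbatim copy, by alias and by position, respectively:

select upper(class) as class_u, count(*)
  from students
  group by upper(class);

select upper(class) as class_u, count(*)
  from students
  group by class_u;

select upper(class) as class_u, count(*)
  from students
  group by 1;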

In addition to the required items, the grouping list may also contain:

  • Columns from the source table that are not in the select list, or non-aggregate expressions based on such columns. Adding such columns may further subdivide the groups. However, since these columns are not in the select list, you can’t tell which aggregated row corresponds to which value in the column. So, in general, if you are interested in this information, you also include the column or expression in the select list — which brings you back to the rule: “every non-aggregate column in the select list must also be in the grouping list”.

  • Expressions that aren’t dependent on the data in the underlying set, e.g. constants, context variables, single-value non-correlated subselects etc. This is only mentioned for completeness, as adding such items is utterly pointless: they don’t affect the grouping at all. “Harmless but useless” items like these may also figure in the select list without being copied to the grouping list.

Examples

When the select list contains only aggregate columns, GROUP BY is not mandatory:

select count(*), avg(age) from students
  where sex = 'M';

This will return a single row listing the number of male students and their average age. Adding expressions that don’t depend on values in individual rows of table STUDENTS doesn’t change that:

select count(*), avg(age), current_date from students
  where sex = 'M';

The row will now have an extra column showing the current date, but other than that, nothing fundamental has changed. A GROUP BY clause is still not required.

However, in both the above examples it is allowed. This is perfectly valid:

select count(*), avg(age) from students
  where sex = 'M'
  group by class;

This will return a row for each class that has boys in it, listing the number of boys and their average age in that particular class. (If you also leave the current_date field in, this value will be repeated on every row, which is not very exciting.)

The above query has a major drawback though: it gives you information about the different classes, but it doesn’t tell you which row applies to which class. In order to get that extra bit of information, the non-aggregate column CLASS must be added to the select list:

select class, count(*), avg(age) from students
  where sex = 'M'
  group by class;

Now we have a useful query. Notice that the addition of column CLASS also makes the GROUP BY clause mandatory. We can’t drop that clause anymore, unless we also remove CLASS from the column list.

The output of our last query may look something like this:

CLASS  COUNT  AVG
2A     12     13.5
2B     9      13.9
3A     11     14.6
3B     12     14.4
…      …      …

The headings “COUNT” and “AVG” are not very informative. In a simple case like this, you might get away with that, but in general you should give aggregate columns a meaningful name by aliasing them:

select class,
       count(*) as num_boys,
       avg(age) as boys_avg_age
  from students
  where sex = 'M'
  group by class;

As you may recall from the formal syntax of the columns list, the AS keyword is optional.

Adding more non-aggregate (or rather: row-dependent) columns requires adding them to the GROUP BY clause too. For instance, you might want to see the above information for girls as well; and you may also want to differentiate between boarding and day students:

select class,
       sex,
       boarding_type,
       count(*) as number,
       avg(age) as avg_age
  from students
  group by class, sex, boarding_type;

This may give you the following result:

CLASS  SEX  BOARDING_TYPE  NUMBER  AVG_AGE
2A     F    BOARDING       9       13.3
2A     F    DAY            6       13.5
2A     M    BOARDING       7       13.6
2A     M    DAY            5       13.4
2B     F    BOARDING       11      13.7
2B     F    DAY            5       13.7
2B     M    BOARDING       6       13.8
…      …    …              …       …

Each row in the result set corresponds to one particular combination of the columns CLASS, SEX and BOARDING_TYPE. The aggregate results — number and average age — are given for each of these rather specific groups individually. In a query like this, you don’t see a total for boys as a whole, or day students as a whole. That’s the tradeoff: the more non-aggregate columns you add, the more you can pinpoint very specific groups, but the more you also lose sight of the general picture. Of course, you can still obtain the “coarser” aggregates through separate queries.

HAVING

Just as a WHERE clause limits the rows in a dataset to those that meet the search condition, so the HAVING sub-clause imposes restrictions on the aggregated rows in a grouped set. HAVING is optional, and can only be used in conjunction with GROUP BY.

The condition(s) in the HAVING clause can refer to:

  • Any aggregated column in the select list. This is the most widely used case.

  • Any aggregated expression that is not in the select list, but allowed in the context of the query. This is sometimes useful too.

  • Any column in the GROUP BY list. While legal, it is more efficient to filter on these non-aggregated data at an earlier stage: in the WHERE clause.

  • Any expression whose value doesn’t depend on the contents of the dataset (like a constant or a context variable). This is valid but utterly pointless, because it will either suppress the entire set or leave it untouched, based on conditions that have nothing to do with the set itself.

A HAVING clause cannot contain:

  • Non-aggregated column expressions that are not in the GROUP BY list.

  • Column positions. An integer in the HAVING clause is just an integer.

  • Column aliases, not even if they appear in the GROUP BY clause!

Examples

Building on our earlier examples, this could be used to skip small groups of students:

select class,
       count(*) as num_boys,
       avg(age) as boys_avg_age
  from students
  where sex = 'M'
  group by class
  having count(*) >= 5;

To select only groups that have a minimum age spread:

select class,
       count(*) as num_boys,
       avg(age) as boys_avg_age
  from students
  where sex = 'M'
  group by class
  having max(age) - min(age) > 1.2;

Notice that if you’re really interested in this information, you’d normally include min(age) and max(age), or the expression “max(age) - min(age)”, in the select list as well!

To include only 3rd classes:

select class,
       count(*) as num_boys,
       avg(age) as boys_avg_age
  from students
  where sex = 'M'
  group by class
  having class starting with '3';

Better would be to move this condition to the WHERE clause:

select class,
       count(*) as num_boys,
       avg(age) as boys_avg_age
  from students
  where sex = 'M' and class starting with '3'
  group by class;

6.1.7. The PLAN clause

The PLAN clause enables the user to submit a data retrieval plan, thus overriding the plan that the optimizer would have generated automatically.

Syntax
PLAN <plan-expr>

<plan-expr> ::=
    (<plan-item> [, <plan-item> ...])
  | <sorted-item>
  | <joined-item>
  | <merged-item>
  | <hash-item>

<sorted-item> ::= SORT (<plan-item>)

<joined-item> ::=
  JOIN (<plan-item>, <plan-item> [, <plan-item> ...])

<merged-item> ::=
  [SORT] MERGE (<sorted-item>, <sorted-item> [, <sorted-item> ...])

<hash-item> ::=
  HASH (<plan-item>, <plan-item> [, <plan-item> ...])

<plan-item> ::= <basic-item> | <plan-expr>

<basic-item> ::=
  <relation> { NATURAL
             | INDEX (<indexlist>)
             | ORDER index [INDEX (<indexlist>)] }

<relation> ::= table | view [table]

<indexlist> ::= index [, index ...]
Table 74. Arguments for the PLAN Clause
Argument Description

table

Table name or its alias

view

View name

index

Index name

Every time a user submits a query to the Firebird engine, the optimizer computes a data retrieval strategy. Most Firebird clients can make this retrieval plan visible to the user. In Firebird’s own isql utility, this is done with the command SET PLAN ON. If you are studying query plans rather than running queries, SET PLANONLY ON will show the plan without executing the query. Use SET PLANONLY OFF to execute the query and show the plan.
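
For instance, a short isql session might look like this (the plan output depends on the actual database and its indexes):

SET PLANONLY ON;
select * from students where class = '3C';
-- PLAN (STUDENTS INDEX (IX_STUD_CLASS))
SET PLANONLY OFF;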

A more detailed plan can be obtained when you enable the advanced plan. In isql this can be done with SET EXPLAIN ON. The advanced plan displays more detailed information about the access methods used by the optimizer; however, it cannot be included in the PLAN clause of a statement. The full description of the advanced plan is beyond the scope of this Language Reference.

In most situations, you can trust that Firebird will select the optimal query plan for you. However, if you have complicated queries that seem to be underperforming, it may very well be worth your while to examine the plan and see if you can improve on it.

Simple plans

The simplest plans consist of just a relation name followed by a retrieval method. For example, for an unsorted single-table select without a WHERE clause:

select * from students
  plan (students natural);

Advanced plan:

Select Expression
  -> Table "STUDENTS" Full Scan

If there’s a WHERE or a HAVING clause, you can specify the index to be used for finding matches:

select * from students
  where class = '3C'
  plan (students index (ix_stud_class));

Advanced plan:

Select Expression
  -> Filter
    -> Table "STUDENTS" Access By ID
      -> Bitmap
        -> Index "IX_STUD_CLASS" Range Scan (full match)

The INDEX directive is also used for join conditions (to be discussed a little later). It can contain a list of indexes, separated by commas.

ORDER specifies the index for sorting the set if an ORDER BY or GROUP BY clause is present:

select * from students
  plan (students order pk_students)
  order by id;

Advanced plan:

Select Expression
  -> Table "STUDENTS" Access By ID
    -> Index "PK_STUDENTS" Full Scan

ORDER and INDEX can be combined:

select * from students
  where class >= '3'
  plan (students order pk_students index (ix_stud_class))
  order by id;

Advanced plan:

Select Expression
  -> Filter
    -> Table "STUDENTS" Access By ID
      -> Index "PK_STUDENTS" Full Scan
        -> Bitmap
          -> Index "IX_STUD_CLASS" Range Scan (lower bound: 1/1)

It is perfectly OK if ORDER and INDEX specify the same index:

select * from students
  where class >= '3'
  plan (students order ix_stud_class index (ix_stud_class))
  order by class;

Advanced plan:

Select Expression
  -> Filter
    -> Table "STUDENTS" Access By ID
      -> Index "IX_STUD_CLASS" Range Scan (lower bound: 1/1)
        -> Bitmap
          -> Index "IX_STUD_CLASS" Range Scan (lower bound: 1/1)

For sorting sets when there’s no usable index available (or if you want to suppress its use), leave out ORDER and prepend the plan expression with SORT:

select * from students
  plan sort (students natural)
  order by name;

Advanced plan:

Select Expression
  -> Sort (record length: 128, key length: 56)
    -> Table "STUDENTS" Full Scan

Or when an index is used for the search:

select * from students
  where class >= '3'
  plan sort (students index (ix_stud_class))
  order by name;

Advanced plan:

Select Expression
  -> Sort (record length: 136, key length: 56)
    -> Filter
      -> Table "STUDENTS" Access By ID
        -> Bitmap
          -> Index "IX_STUD_CLASS" Range Scan (lower bound: 1/1)

Notice that SORT, unlike ORDER, is outside the parentheses. This reflects the fact that the data rows are retrieved unordered and sorted afterwards by the engine.

When selecting from a view, specify the view and the table involved. For instance, if you have a view FRESHMEN that selects just the first-year students:

select * from freshmen
  plan (freshmen students natural);

Advanced plan:

Select Expression
  -> Table "STUDENTS" as "FRESHMEN" Full Scan

Or, for instance:

select * from freshmen
  where id > 10
  plan sort (freshmen students index (pk_students))
  order by name desc;

Advanced plan:

Select Expression
  -> Sort (record length: 144, key length: 24)
    -> Filter
      -> Table "STUDENTS" as "FRESHMEN" Access By ID
        -> Bitmap
          -> Index "PK_STUDENTS" Range Scan (lower bound: 1/1)

If a table or view has been aliased, it is the alias, not the original name, that must be used in the PLAN clause.
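
For example, a minimal sketch reusing the STUDENTS table:

select * from students s
  plan (s natural);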

Composite plans

When a join is made, you can specify the index which is to be used for matching. You must also use the JOIN directive on the two streams in the plan:

select s.id, s.name, s.class, c.mentor
  from students s
  join classes c on c.name = s.class
  plan join (s natural, c index (pk_classes));

Advanced plan:

Select Expression
  -> Nested Loop Join (inner)
    -> Table "STUDENTS" as "S" Full Scan
    -> Filter
      -> Table "CLASSES" as "C" Access By ID
        -> Bitmap
          -> Index "PK_CLASSES" Unique Scan

The same join, sorted on an indexed column:

select s.id, s.name, s.class, c.mentor
  from students s
  join classes c on c.name = s.class
  plan join (s order pk_students, c index (pk_classes))
  order by s.id;

Advanced plan:

Select Expression
  -> Nested Loop Join (inner)
    -> Table "STUDENTS" as "S" Access By ID
      -> Index "PK_STUDENTS" Full Scan
    -> Filter
      -> Table "CLASSES" as "C" Access By ID
        -> Bitmap
          -> Index "PK_CLASSES" Unique Scan

And on a non-indexed column:

select s.id, s.name, s.class, c.mentor
  from students s
  join classes c on c.name = s.class
  plan sort (join (s natural, c index (pk_classes)))
  order by s.name;

Advanced plan:

Select Expression
  -> Sort (record length: 152, key length: 12)
    -> Nested Loop Join (inner)
      -> Table "STUDENTS" as "S" Full Scan
      -> Filter
        -> Table "CLASSES" as "C" Access By ID
          -> Bitmap
            -> Index "PK_CLASSES" Unique Scan

With a search condition added:

select s.id, s.name, s.class, c.mentor
  from students s
  join classes c on c.name = s.class
  where s.class <= '2'
  plan sort (join (s index (fk_student_class), c index (pk_classes)))
  order by s.name;

Advanced plan:

Select Expression
  -> Sort (record length: 152, key length: 12)
    -> Nested Loop Join (inner)
      -> Filter
        -> Table "STUDENTS" as "S" Access By ID
          -> Bitmap
            -> Index "FK_STUDENT_CLASS" Range Scan (lower bound: 1/1)
      -> Filter
        -> Table "CLASSES" as "C" Access By ID
          -> Bitmap
            -> Index "PK_CLASSES" Unique Scan

As a left outer join:

select s.id, s.name, s.class, c.mentor
  from classes c
  left join students s on c.name = s.class
  where s.class <= '2'
  plan sort (join (c natural, s index (fk_student_class)))
  order by s.name;

Advanced plan:

Select Expression
  -> Sort (record length: 192, key length: 56)
    -> Filter
      -> Nested Loop Join (outer)
        -> Table "CLASSES" as "C" Full Scan
        -> Filter
          -> Table "STUDENTS" as "S" Access By ID
            -> Bitmap
              -> Index "FK_STUDENT_CLASS" Range Scan (full match)

If there are no indices available to match the join condition (or if you don’t want to use them), then it is possible to connect the streams using the HASH or MERGE methods.

To connect using the HASH method in the plan, the HASH directive is used instead of the JOIN directive. In this case, the smaller (secondary) stream is materialized completely into an internal buffer. While reading this secondary stream, a hash function is applied and a pair {hash, pointer to buffer} is written to a hash table. Then the primary stream is read and its hash key is tested against the hash table.

select *
  from students s
  join classes c on c.cookie = s.cookie
  plan hash (c natural, s natural)

Advanced plan:

Select Expression
  -> Filter
    -> Hash Join (inner)
      -> Table "STUDENTS" as "S" Full Scan
      -> Record Buffer (record length: 145)
        -> Table "CLASSES" as "C" Full Scan

For a MERGE join, the plan must first sort both streams on their join column(s) and then merge. This is achieved with the SORT directive (which we’ve already seen) and MERGE instead of JOIN:

select * from students s
  join classes c on c.cookie = s.cookie
  plan merge (sort (c natural), sort (s natural));

Adding an ORDER BY clause means the result of the merge must also be sorted:

select * from students s
  join classes c on c.cookie = s.cookie
  plan sort (merge (sort (c natural), sort (s natural)))
  order by c.name, s.id;

Finally, we add a search condition on two indexable columns of table STUDENTS:

select * from students s
  join classes c on c.cookie = s.cookie
  where s.id < 10 and s.class <= '2'
  plan sort (merge (sort (c natural),
                    sort (s index (pk_students, fk_student_class))))
  order by c.name, s.id;

As follows from the formal syntax definition, JOINs and MERGEs in the plan may combine more than two streams. Also, every plan expression may be used as a plan item in an encompassing plan. This means that plans of certain complicated queries may have various nesting levels.
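
For example, a three-stream join might be planned with a single JOIN directive. This is only a sketch: the TEACHERS table and its PK_TEACHERS index are hypothetical.

select s.id, s.name, t.name as mentor_name
  from students s
  join classes c on c.name = s.class
  join teachers t on t.id = c.mentor
  plan join (s natural, c index (pk_classes), t index (pk_teachers));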

Finally, instead of MERGE you may also write SORT MERGE. As this makes absolutely no difference and may create confusion with “real” SORT directives (the ones that do make a difference), it’s probably best to stick to plain MERGE.

In addition to the plan for the main query, you can specify a plan for each subquery. For example, the following query with multiple plans will work:

select *
from color
where exists (
  select *
  from horse
  where horse.code_color = color.code_color
  plan (horse index (fk_horse_color)))
plan (color natural)

Occasionally, the optimizer will accept a plan and then not follow it, even though it does not reject it as invalid. One such example was

MERGE (unsorted stream, unsorted stream)

It is advisable to treat such plans as “deprecated”.

6.1.8. UNION

The UNION clause concatenates two or more datasets, thus increasing the number of rows but not the number of columns. Datasets taking part in a UNION must have the same number of columns, and columns at corresponding positions must be of the same type. Other than that, they may be totally unrelated.

By default, a union suppresses duplicate rows. UNION ALL shows all rows, including any duplicates. The optional DISTINCT keyword makes the default behaviour explicit.

Syntax
<union> ::=
  <individual-select>
  UNION [{DISTINCT | ALL}]
  <individual-select>
  [
    [UNION [{DISTINCT | ALL}]
    <individual-select>
    ...
  ]
  [<union-wide-clauses>]

<individual-select> ::=
  SELECT
  [TRANSACTION name]
  [FIRST m] [SKIP n]
  [{DISTINCT | ALL}] <columns>
  [INTO <host-varlist>]
  FROM <source> [[AS] alias]
  [<joins>]
  [WHERE <condition>]
  [GROUP BY <grouping-list>
  [HAVING <aggregate-condition>]]
  [PLAN <plan-expr>]

<union-wide-clauses> ::=
  [ORDER BY <ordering-list>]
  [{ ROWS <m> [TO <n>]
   | [OFFSET n {ROW | ROWS}]
     [FETCH {FIRST | NEXT} [m] {ROW | ROWS} ONLY]
  }]
  [FOR UPDATE [OF <columns>]]
  [WITH LOCK]
  [INTO <PSQL-varlist>]

Unions take their column names from the first select query. If you want to alias union columns, do so in the column list of the topmost SELECT. Aliases in other participating selects are allowed and may even be useful, but will not propagate to the union level.

If a union has an ORDER BY clause, the only allowed sort items are integer literals indicating 1-based column positions, optionally followed by an ASC/DESC and/or a NULLS {FIRST | LAST} directive. This also implies that you cannot order a union by anything that isn’t a column in the union. (You can, however, wrap it in a derived table, which gives you back all the usual sort options.)
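
For example, to sort a union by an expression, wrap it in a derived table. This is a sketch using the TRANSLATORS and PROOFREADERS tables from the examples below:

select u.name, u.phone
  from (select name, phone from translators
        union
        select name, telephone from proofreaders) u
  order by upper(u.name);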

Unions are allowed in subqueries of any kind and can themselves contain subqueries. They can also contain joins, and can take part in a join when wrapped in a derived table.

Examples

This query presents information from different music collections in one dataset using unions:

select id, title, artist, length, 'CD' as medium
  from cds
union
select id, title, artist, length, 'LP'
  from records
union
select id, title, artist, length, 'MC'
  from cassettes
order by 3, 2;  -- artist, title

If id, title, artist and length are the only fields in the tables involved, the query can also be written as:

select c.*, 'CD' as medium
  from cds c
union
select r.*, 'LP'
  from records r
union
select c.*, 'MC'
  from cassettes c
order by 3, 2;  -- artist, title

Qualifying the “stars” is necessary here because they are not the only item in the column list. Notice how the “c” aliases in the first and third select do not conflict with each other: their scopes are not union-wide but apply only to their respective select queries.

The next query retrieves names and phone numbers from translators and proofreaders. Translators who also work as proofreaders will show up only once in the result set, provided their phone number is the same in both tables. The same result can be obtained without DISTINCT. With ALL, these people would appear twice.

select name, phone from translators
  union distinct
select name, telephone from proofreaders;

A UNION within a subquery:

select name, phone, hourly_rate from clowns
where hourly_rate < all
  (select hourly_rate from jugglers
     union
   select hourly_rate from acrobats)
order by hourly_rate;

6.1.9. ORDER BY

When a SELECT statement is executed, the result set is not sorted in any way. It often happens that rows appear to be sorted chronologically, simply because they are returned in the same order they were added to the table by INSERT statements. This is not something you should rely on: the order may change depending on the plan, or after updates to rows, etc. To specify an explicit sort order for the result set, use an ORDER BY clause.

Syntax
SELECT ... FROM ...
...
ORDER BY <ordering-item> [, <ordering-item> …]

<ordering-item> ::=
  {col-name | col-alias | col-position | <expression>}
  [COLLATE collation-name]
  [ASC[ENDING] | DESC[ENDING]]
  [NULLS {FIRST|LAST}]
Table 75. Arguments for the ORDER BY Clause
Argument Description

col-name

Full column name

col-alias

Column alias

col-position

Column position in the SELECT list

expression

Any expression

collation-name

Collation name (sorting order for string types)

The ORDER BY clause consists of a comma-separated list of the columns on which the result set should be sorted. A sort item can be the name of a column, but only if that column was not given an alias in the SELECT list; if an alias was assigned with the AS keyword, the alias must be used instead. The ordinal position of the column in the SELECT list and the column alias can be used without restriction.

The three forms of expressing the columns for the sort order can be mixed in the same ORDER BY clause. For instance, one column in the list can be specified by its name and another column can be specified by its number.

If you sort by column position or alias, then the expression corresponding to that position or alias will be copied from the SELECT list. This also applies to subqueries; thus, a subquery used as a sort item this way will be executed at least twice.
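
For example, in the following sketch (the CUSTOMERS and ORDERS tables are hypothetical), the subquery behind ORDER_COUNT is evaluated again for the sort:

select c.id,
       (select count(*) from orders o
        where o.cust_id = c.id) as order_count
from customers c
order by order_count;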

If you use the column position to specify the sort order for a query of the SELECT * style, the server expands the asterisk to the full column list in order to determine the columns for the sort. It is, however, considered “sloppy practice” to design ordered sets this way.

Sorting Direction

The keyword ASCENDING — usually abbreviated to ASC — specifies a sort direction from lowest to highest. ASCENDING is the default sort direction.

The keyword DESCENDING — usually abbreviated to DESC — specifies a sort direction from highest to lowest.

Specifying ascending order for one column and descending order for another is allowed.

Collation Order

The keyword COLLATE specifies the collation order for a string column if you need a collation that is different from the normal one for this column. The normal collation order will be either the default one for the database character set, or the one set explicitly in the column’s definition.
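
For example, assuming NAME is a UTF8 column, the built-in case- and accent-insensitive Unicode collation can be requested just for the sort:

SELECT name
FROM People
ORDER BY name COLLATE UNICODE_CI_AI;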

NULLs Position

The keyword NULLS defines where NULL in the associated column will fall in the sort order: NULLS FIRST places the rows with the NULL column above rows ordered by that column’s value; NULLS LAST places those rows after the ordered rows.

NULLS FIRST is the default.

Ordering UNION-ed Sets

The discrete queries contributing to a UNION cannot take an ORDER BY clause. The only option is to order the entire output, using one ORDER BY clause at the end of the overall query.

The simplest — and, in some cases, the only — method for specifying the sort order is by the ordinal column position. However, it is also valid to use the column names or aliases, from the first contributing query only.

The ASC/DESC and/or NULLS directives are available for this global set.

If discrete ordering within the contributing set is required, use of derived tables or common table expressions for those sets may be a solution.
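
For example, a common table expression can apply ORDER BY together with ROWS to one contributing set before it enters the union. This is a sketch reusing the TRANSLATORS and PROOFREADERS tables from earlier:

WITH top_translators AS (
  SELECT name, phone
  FROM translators
  ORDER BY name
  ROWS 10
)
SELECT name, phone FROM top_translators
UNION ALL
SELECT name, telephone FROM proofreaders;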

Examples of ORDER BY

Sorting the result set in ascending order, ordering by the RDB$CHARACTER_SET_ID and RDB$COLLATION_ID columns of the RDB$COLLATIONS table:

SELECT
  RDB$CHARACTER_SET_ID AS CHARSET_ID,
  RDB$COLLATION_ID AS COLL_ID,
  RDB$COLLATION_NAME AS NAME
FROM RDB$COLLATIONS
ORDER BY RDB$CHARACTER_SET_ID, RDB$COLLATION_ID;

The same, but sorting by the column aliases:

SELECT
  RDB$CHARACTER_SET_ID AS CHARSET_ID,
  RDB$COLLATION_ID AS COLL_ID,
  RDB$COLLATION_NAME AS NAME
FROM RDB$COLLATIONS
ORDER BY CHARSET_ID, COLL_ID;

Sorting the output data by the column position numbers:

SELECT
  RDB$CHARACTER_SET_ID AS CHARSET_ID,
  RDB$COLLATION_ID AS COLL_ID,
  RDB$COLLATION_NAME AS NAME
FROM RDB$COLLATIONS
ORDER BY 1, 2;

Sorting a SELECT * query by position numbers — possible, but nasty and not recommended:

SELECT *
FROM RDB$COLLATIONS
ORDER BY 3, 2;

Sorting by the second column in the BOOKS table, or — if BOOKS has only one column — the FILMS.DIRECTOR column:

SELECT
    BOOKS.*,
    FILMS.DIRECTOR
FROM BOOKS, FILMS
ORDER BY 2;

Sorting in descending order by the values of column PROCESS_TIME, with NULLs placed at the beginning of the set:

SELECT *
FROM MSG
ORDER BY PROCESS_TIME DESC NULLS FIRST;

Sorting the set obtained by a UNION of two queries. Results are sorted in descending order for the values in the second column, with NULLs at the end of the set; and in ascending order for the values of the first column with NULLs at the beginning.

SELECT
  DOC_NUMBER, DOC_DATE
FROM PAYORDER
UNION ALL
SELECT
  DOC_NUMBER, DOC_DATE
FROM BUDGORDER
ORDER BY 2 DESC NULLS LAST, 1 ASC NULLS FIRST;

6.1.10. ROWS

Used for

Retrieving a slice of rows from an ordered set

Available in

DSQL, PSQL

Syntax
SELECT <columns> FROM ...
  [WHERE ...]
  [ORDER BY ...]
  ROWS m [TO n]
Table 76. Arguments for the ROWS Clause
Argument Description

m, n

Any integer expressions

ROWS is non-standard syntax

ROWS is a Firebird-specific clause. Use the SQL-standard OFFSET, FETCH syntax wherever possible.

Limits the amount of rows returned by the SELECT statement to a specified number or range.

The ROWS clause does the same job as the FIRST and SKIP clauses, but neither of those is SQL-compliant. Unlike FIRST and SKIP, and OFFSET and FETCH, the ROWS and TO clauses accept any type of integer expression as their arguments, without parentheses. Of course, parentheses may still be needed for nested evaluations inside the expression, and a subquery must always be enclosed in parentheses.

  • Numbering of rows in the intermediate set — the overall set cached on disk before the “slice” is extracted — starts at 1.

  • OFFSET/FETCH, FIRST/SKIP, and ROWS can all be used without the ORDER BY clause, although it rarely makes sense to do so — except perhaps when you want to take a quick look at the table data and don’t care that rows will be in a non-deterministic order. For this purpose, a query like “SELECT * FROM TABLE1 ROWS 20” would return the first 20 rows instead of a whole table that might be rather big.

Calling ROWS m retrieves the first m records from the set specified.

Characteristics of using ROWS m without a TO clause:
  • If m is greater than the total number of records in the intermediate data set, the entire set is returned

  • If m = 0, an empty set is returned

  • If m < 0, the SELECT statement call fails with an error

Calling ROWS m TO n retrieves the rows from the set, starting at row m and ending after row n — the set is inclusive.

Characteristics of using ROWS m with a TO clause:
  • If m is greater than the total number of rows in the intermediate set and n >= m, an empty set is returned

  • If m is not greater than n and n is greater than the total number of rows in the intermediate set, the result set will be limited to rows starting from m, up to the end of the set

  • If m < 1 and n < 1, the SELECT statement call fails with an error

  • If n = m - 1, an empty set is returned

  • If n < m - 1, the SELECT statement call fails with an error

Using a TO clause without a ROWS clause:

While ROWS replaces the FIRST and SKIP syntax, there is one situation where the ROWS syntax does not provide the same behaviour: specifying SKIP n on its own returns the entire intermediate set, without the first n rows. The ROWS …​ TO syntax needs a little help to achieve this.

With the ROWS syntax, you need a ROWS clause in association with the TO clause and deliberately make the second (n) argument greater than the size of the intermediate data set. This is achieved by creating an expression for n that uses a subquery to retrieve the count of rows in the intermediate set and adds 1 to it.

Replacing FIRST/SKIP and OFFSET/FETCH

The ROWS clause can be used instead of the SQL-standard OFFSET/FETCH or non-standard FIRST/SKIP clauses, except in the case where only OFFSET or SKIP is used — that is, when the whole result set is returned except for the specified number of rows skipped from the beginning.

In order to implement this behaviour using ROWS, you must specify the TO clause with a value larger than the size of the returned result set.

Mixing ROWS and FIRST/SKIP or OFFSET/FETCH

ROWS syntax cannot be mixed with FIRST/SKIP or OFFSET/FETCH in the same SELECT expression. Using the different syntaxes in different subqueries in the same statement is allowed.

ROWS Syntax in UNION Queries

When ROWS is used in a UNION query, the ROWS directive is applied to the unioned set and must be placed after the last SELECT statement.

If a need arises to limit the subsets returned by one or more SELECT statements inside UNION, there are a couple of options:

  1. Use FIRST/SKIP syntax in these SELECT statements — bearing in mind that an ordering clause (ORDER BY) cannot be applied locally to the discrete queries, but only to the combined output.

  2. Convert the queries to derived tables with their own ROWS clauses.
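
For example (a sketch: the JUGGLERS and ACROBATS tables appeared earlier in this chapter, but their NAME columns are assumptions):

SELECT name, hourly_rate
  FROM (SELECT name, hourly_rate FROM jugglers
        ORDER BY hourly_rate ROWS 3) j
UNION ALL
SELECT name, hourly_rate
  FROM (SELECT name, hourly_rate FROM acrobats
        ORDER BY hourly_rate ROWS 3) a;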

Examples of ROWS

The following examples rewrite the examples used in the section about FIRST and SKIP, earlier in this chapter.

Retrieve the first ten names from the output of a sorted query on the PEOPLE table:

SELECT id, name
FROM People
ORDER BY name ASC
ROWS 1 TO 10;

or its equivalent

SELECT id, name
FROM People
ORDER BY name ASC
ROWS 10;

Return all records from the PEOPLE table except for the first 10 names:

SELECT id, name
FROM People
ORDER BY name ASC
ROWS 11 TO (SELECT COUNT(*) FROM People);

And this query will return the last 10 records (pay attention to the parentheses):

SELECT id, name
FROM People
ORDER BY name ASC
ROWS (SELECT COUNT(*) - 9 FROM People)
TO (SELECT COUNT(*) FROM People);

This one will return rows 81-100 from the PEOPLE table:

SELECT id, name
FROM People
ORDER BY name ASC
ROWS 81 TO 100;

ROWS can also be used with the UPDATE and DELETE statements.

6.1.11. OFFSET, FETCH

Used for

Retrieving a slice of rows from an ordered set

Available in

DSQL, PSQL

Syntax
SELECT <columns> FROM ...
  [WHERE ...]
  [ORDER BY ...]
  [OFFSET <m> {ROW | ROWS}]
  [FETCH {FIRST | NEXT} [ <n> ] { ROW | ROWS } ONLY]

<m>, <n>  ::=
    <integer-literal>
  | <query-parameter>
Table 77. Arguments for the OFFSET and FETCH Clause
Argument Description

integer-literal

Integer literal

query-parameter

Query parameter place-holder. ? in DSQL and :paramname in PSQL

The OFFSET and FETCH clauses are an SQL:2008 compliant equivalent for FIRST/SKIP, and an alternative for ROWS. The OFFSET clause specifies the number of rows to skip. The FETCH clause specifies the number of rows to fetch.

When <n> is left out of the FETCH clause (e.g. FETCH FIRST ROW ONLY), one row will be fetched.

The choice between ROW and ROWS, or between FIRST and NEXT, is purely for readability (e.g. making the query more grammatically natural). Technically, there is no difference between OFFSET 10 ROW and OFFSET 10 ROWS, or between FETCH NEXT 10 ROWS ONLY and FETCH FIRST 10 ROWS ONLY.

As with SKIP and FIRST, OFFSET and FETCH clauses can be applied independently, in both top-level and nested query expressions.

  1. Firebird doesn’t support the percentage FETCH defined in the SQL standard.

  2. Firebird doesn’t support the FETCH …​ WITH TIES defined in the SQL standard.

  3. The FIRST/SKIP and ROWS clause are non-standard alternatives.

  4. The OFFSET and/or FETCH clauses cannot be combined with ROWS or FIRST/SKIP on the same query expression.

  5. Expressions, column references, etc are not allowed within either clause.

  6. Contrary to the ROWS clause, OFFSET and FETCH are only available on SELECT statements.

Examples of OFFSET and FETCH
Return all rows except the first 10, ordered by column COL1
SELECT *
FROM T1
ORDER BY COL1
OFFSET 10 ROWS
Return the first 10 rows, ordered by column COL1
SELECT *
FROM T1
ORDER BY COL1
FETCH FIRST 10 ROWS ONLY
Using OFFSET and FETCH clauses in a derived table and in the outer query
SELECT *
FROM (
  SELECT *
  FROM T1
  ORDER BY COL1 DESC
  OFFSET 1 ROW
  FETCH NEXT 10 ROWS ONLY
) a
ORDER BY a.COL1
FETCH FIRST ROW ONLY

The following examples rewrite the FIRST/SKIP examples and ROWS examples earlier in this chapter.

Retrieve the first ten names from the output of a sorted query on the PEOPLE table:

SELECT id, name
FROM People
ORDER BY name ASC
FETCH NEXT 10 ROWS ONLY;

Return all records from the PEOPLE table except for the first 10 names:

SELECT id, name
FROM People
ORDER BY name ASC
OFFSET 10 ROWS;

And this query will return the last 10 records. Contrary to FIRST/SKIP and ROWS, we cannot use expressions (including subqueries) in OFFSET/FETCH. To retrieve the last 10 rows, reverse the sort in a derived table so that the desired rows come first, fetch those 10 rows, and then sort them in the right order in the outer query.

SELECT id, name
FROM (
  SELECT id, name
  FROM People
  ORDER BY name DESC
  FETCH FIRST 10 ROWS ONLY
) a
ORDER BY name ASC;

This one will return rows 81-100 from the PEOPLE table:

SELECT id, name
FROM People
ORDER BY name ASC
OFFSET 80 ROWS
FETCH NEXT 20 ROWS ONLY;
See also

FIRST, SKIP, ROWS

6.1.12. FOR UPDATE [OF]

Syntax
SELECT ... FROM single_table
  [WHERE ...]
  [FOR UPDATE [OF <column_list>]]

FOR UPDATE does not do what its name suggests. Its only effect currently is to disable the pre-fetch buffer.

This is likely to change in the future: the plan is to validate whether cursors marked with FOR UPDATE are truly updatable, and to reject positioned updates and deletes for cursors evaluated as non-updatable.

The OF sub-clause does not do anything at all.

6.1.13. WITH LOCK

Used for

Limited pessimistic locking

Available in

DSQL, PSQL

Syntax
SELECT ... FROM single_table
  [WHERE ...]
  [FOR UPDATE [OF <column_list>]]
  WITH LOCK

WITH LOCK provides a limited explicit pessimistic locking capability for cautious use in conditions where the affected row set is:

  1. extremely small (ideally singleton), and

  2. precisely controlled by the application code.

This is for experts only!

The need for a pessimistic lock in Firebird is very rare indeed and should be well understood before use of this extension is considered.

It is essential to understand the effects of transaction isolation and other transaction attributes before attempting to implement explicit locking in your application.

If the WITH LOCK clause succeeds, it will secure a lock on the selected rows and prevent any other transaction from obtaining write access to any of those rows, or their dependants, until your transaction ends.

WITH LOCK can only be used with a top-level, single-table SELECT statement. It is not available:

  • in a subquery specification

  • for joined sets

  • with the DISTINCT operator, a GROUP BY clause or any other aggregating operation

  • with a view

  • with the output of a selectable stored procedure

  • with an external table

  • with a UNION query

As the engine considers, in turn, each record falling under an explicit lock statement, it returns either the most recently committed version of the record, regardless of the database state when the statement was submitted, or an exception.

Wait behaviour and conflict reporting depend on the transaction parameters specified in the TPB block:

Table 78. How TPB settings affect explicit locking
TPB mode Behaviour

isc_tpb_consistency

Explicit locks are overridden by implicit or explicit table-level locks and are ignored.

isc_tpb_concurrency + isc_tpb_nowait

If a record is modified by any transaction that was committed since the transaction attempting to get explicit lock started, or an active transaction has performed a modification of this record, an update conflict exception is raised immediately.

isc_tpb_concurrency + isc_tpb_wait

If the record is modified by any transaction that has committed since the transaction attempting to get explicit lock started, an update conflict exception is raised immediately.

If an active transaction is holding ownership on this record (via explicit locking or by a normal optimistic write-lock) the transaction attempting the explicit lock waits for the outcome of the blocking transaction and, when it finishes, attempts to get the lock on the record again. This means that, if the blocking transaction committed a modified version of this record, an update conflict exception will be raised.

isc_tpb_read_committed + isc_tpb_nowait

If there is an active transaction holding ownership on this record (via explicit locking or normal update), an update conflict exception is raised immediately.

isc_tpb_read_committed + isc_tpb_wait

If there is an active transaction holding ownership on this record (via explicit locking or by a normal optimistic write-lock), the transaction attempting the explicit lock waits for the outcome of blocking transaction and when it finishes, attempts to get the lock on the record again.

Update conflict exceptions can never be raised by an explicit lock statement in this TPB mode.

Usage with a FOR UPDATE Clause

If the FOR UPDATE sub-clause precedes the WITH LOCK sub-clause, buffered fetches are suppressed. Thus, the lock will be applied to each row, one by one, at the moment it is fetched. It becomes possible, then, that a lock which appeared to succeed when requested will nevertheless fail subsequently, when an attempt is made to fetch a row which has become locked by another transaction in the meantime.

As an alternative, it may be possible in your access components to set the size of the fetch buffer to 1. This would enable you to process the currently-locked row before the next is fetched and locked, or to handle errors without rolling back your transaction.

OF <column_list>

This optional sub-clause does nothing at all.

See also

FOR UPDATE [OF]

How the engine deals with WITH LOCK

When an UPDATE statement tries to access a record that is locked by another transaction, it either raises an update conflict exception or waits for the locking transaction to finish, depending on TPB mode. Engine behaviour here is the same as if this record had already been modified by the locking transaction.

No special gdscodes are returned from conflicts involving pessimistic locks.

The engine guarantees that all records returned by an explicit lock statement are actually locked and do meet the search conditions specified in the WHERE clause, as long as the search conditions do not depend on any other tables via joins, subqueries, etc. It also guarantees that rows not meeting the search conditions will not be locked by the statement. It cannot guarantee that there are no rows which, though meeting the search conditions, are not locked.

This situation can arise if other, parallel transactions commit their changes during the course of the locking statement’s execution.

The engine locks rows at fetch time. This has important consequences if you lock several rows at once. Many access methods for Firebird databases default to fetching output in packets of a few hundred rows (“buffered fetches”). Most data access components cannot bring you the rows contained in the last-fetched packet when an error occurs.

Caveats using WITH LOCK
  • Rolling back of an implicit or explicit savepoint releases record locks that were taken under that savepoint, but it doesn’t notify waiting transactions. Applications should not depend on this behaviour as it may get changed in the future.

  • While explicit locks can be used to prevent and/or handle unusual update conflict errors, the volume of deadlock errors will grow unless you design your locking strategy carefully and control it rigorously.

  • Most applications do not need explicit locks at all. The main purposes of explicit locks are:

    1. to prevent expensive handling of update conflict errors in heavily loaded applications, and

    2. to maintain integrity of objects mapped to a relational database in a clustered environment.

    If your use of explicit locking doesn’t fall in one of these two categories, then it’s the wrong way to do the task in Firebird.

  • Explicit locking is an advanced feature; do not misuse it! While solutions for these kinds of problems may be very important for web sites handling thousands of concurrent writers, or for ERP/CRM systems operating in large corporations, most application programs do not need to work in such conditions.

Examples using explicit locking
  1. Simple:

    SELECT * FROM DOCUMENT WHERE ID=? WITH LOCK;
  2. Multiple rows, one-by-one processing with DSQL cursor:

    SELECT * FROM DOCUMENT WHERE PARENT_ID=?
      FOR UPDATE WITH LOCK;

6.1.14. INTO

Used for

Passing SELECT output into variables

Available in

PSQL

Syntax

In PSQL the INTO clause is placed at the very end of the SELECT statement.

SELECT [...] <column-list>
FROM ...
[...]
[INTO <variable-list>]

<variable-list> ::= [:]psqlvar [, [:]psqlvar ...]

The colon prefix before local variable names in PSQL is optional in the INTO clause.

In PSQL code (triggers, stored procedures and executable blocks), the results of a SELECT statement can be loaded row-by-row into local variables. It is often the only way to do anything with the returned values at all, unless an explicit or implicit cursor name is specified. The number, order and types of the variables must match the columns in the output row.

A “plain” SELECT statement can only be used in PSQL if it returns at most one row, i.e., if it is a singleton select. For multi-row selects, PSQL provides the FOR SELECT loop construct, discussed later in the PSQL chapter. PSQL also supports the DECLARE CURSOR statement, which binds a named cursor to a SELECT statement. The cursor can then be used to walk the result set.
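
A minimal EXECUTE BLOCK sketch of such a FOR SELECT loop, assuming the STUDENTS table from earlier examples (the variable types are assumptions):

execute block returns (v_id integer, v_name varchar(60))
as
begin
  -- loop over the result set, one row at a time
  for select id, name
      from students
      into :v_id, :v_name
  do
    suspend;  -- emit the current row to the caller
end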

Examples
  1. Selecting some aggregated values and passing them into previously declared variables min_amt, avg_amt and max_amt:

    select min(amount), avg(cast(amount as float)), max(amount)
      from orders
      where artno = 372218
      into min_amt, avg_amt, max_amt;

    The CAST serves to make the average a real number; otherwise, since amount is presumably an integer field, SQL rules would truncate it to the nearest lower integer.

  2. A PSQL trigger that retrieves two values as a BLOB field (using the LIST() function) and assigns it INTO a third field:

    select list(name, ', ')
      from persons p
      where p.id in (new.father, new.mother)
      into new.parentnames;

6.1.15. Common Table Expressions (“WITH …​ AS …​ SELECT”)

Available in

DSQL, PSQL

Syntax
<cte-construct> ::=
  <cte-defs>
  <main-query>

<cte-defs> ::= WITH [RECURSIVE] <cte> [, <cte> ...]

<cte> ::= name [(<column-list>)] AS (<cte-stmt>)

<column-list> ::= column-alias [, column-alias ...]
Table 79. Arguments for Common Table Expressions
Argument Description

cte-stmt

Any SELECT statement, including UNION

main-query

The main SELECT statement, which can refer to the CTEs defined in the preamble

name

Alias for a table expression

column-alias

Alias for a column in a table expression

A common table expression or CTE can be described as a virtual table or view, defined in a preamble to a main query, and going out of scope after the main query’s execution. The main query can reference any CTEs defined in the preamble as if they were regular tables or views. CTEs can be recursive, i.e. self-referencing, but they cannot be nested.

CTE Notes
  • A CTE definition can contain any legal SELECT statement, as long as it doesn’t have a “WITH…​” preamble of its own (no nesting).

  • CTEs defined for the same main query can reference each other, but care should be taken to avoid loops.

  • CTEs can be referenced from anywhere in the main query.

  • Each CTE can be referenced multiple times in the main query, using different aliases if necessary.

  • When enclosed in parentheses, CTE constructs can be used as subqueries in SELECT statements, but also in UPDATEs, MERGEs etc.

  • In PSQL, CTEs are also supported in FOR loop headers:

    for
      with my_rivers as (select * from rivers where owner = 'me')
        select name, length from my_rivers into :rname, :rlen
    do
    begin
      ..
    end

In Firebird 3.0.2 and earlier, if a CTE is declared, it must be used later: otherwise, you will get an error like this: “CTE "AAA" is not used in query”.

This restriction was removed in Firebird 3.0.3.

Example
with dept_year_budget as (
  select fiscal_year,
         dept_no,
         sum(projected_budget) as budget
  from proj_dept_budget
  group by fiscal_year, dept_no
)
select d.dept_no,
       d.department,
       dyb_2008.budget as budget_08,
       dyb_2009.budget as budget_09
from department d
     left join dept_year_budget dyb_2008
       on d.dept_no = dyb_2008.dept_no
       and dyb_2008.fiscal_year = 2008
     left join dept_year_budget dyb_2009
       on d.dept_no = dyb_2009.dept_no
       and dyb_2009.fiscal_year = 2009
where exists (
  select * from proj_dept_budget b
  where d.dept_no = b.dept_no
);
Recursive CTEs

A recursive (self-referencing) CTE is a UNION which must have at least one non-recursive member, called the anchor. The non-recursive member(s) must be placed before the recursive member(s). Recursive members are linked to each other and to their non-recursive neighbour by UNION ALL operators. The unions between non-recursive members may be of any type.

Recursive CTEs require the RECURSIVE keyword to be present right after WITH. Each recursive union member may reference itself only once, and it must do so in a FROM clause.

A great benefit of recursive CTEs is that they use far less memory and CPU cycles than an equivalent recursive stored procedure.

Execution Pattern

The execution pattern of a recursive CTE is as follows:

  • The engine begins execution from a non-recursive member.

  • For each row evaluated, it starts executing each recursive member one by one, using the current values from the outer row as parameters.

  • If the currently executing instance of a recursive member produces no rows, execution loops back one level and gets the next row from the outer result set.

Example of recursive CTEs
WITH RECURSIVE DEPT_YEAR_BUDGET AS (
  SELECT
      FISCAL_YEAR,
      DEPT_NO,
      SUM(PROJECTED_BUDGET) BUDGET
  FROM PROJ_DEPT_BUDGET
  GROUP BY FISCAL_YEAR, DEPT_NO
),
DEPT_TREE AS (
  SELECT
      DEPT_NO,
      HEAD_DEPT,
      DEPARTMENT,
      CAST('' AS VARCHAR(255)) AS INDENT
  FROM DEPARTMENT
  WHERE HEAD_DEPT IS NULL
  UNION ALL
  SELECT
      D.DEPT_NO,
      D.HEAD_DEPT,
      D.DEPARTMENT,
      H.INDENT || ' '
  FROM DEPARTMENT D
    JOIN DEPT_TREE H ON H.HEAD_DEPT = D.DEPT_NO
)
SELECT
    D.DEPT_NO,
    D.INDENT || D.DEPARTMENT DEPARTMENT,
    DYB_2008.BUDGET AS BUDGET_08,
    DYB_2009.BUDGET AS BUDGET_09
FROM DEPT_TREE D
    LEFT JOIN DEPT_YEAR_BUDGET DYB_2008 ON
      (D.DEPT_NO = DYB_2008.DEPT_NO) AND
      (DYB_2008.FISCAL_YEAR = 2008)
    LEFT JOIN DEPT_YEAR_BUDGET DYB_2009 ON
      (D.DEPT_NO = DYB_2009.DEPT_NO) AND
      (DYB_2009.FISCAL_YEAR = 2009);

The next example returns the pedigree of a horse. The main difference is that recursion occurs simultaneously in two branches of the pedigree.

WITH RECURSIVE PEDIGREE (
  CODE_HORSE,
  CODE_FATHER,
  CODE_MOTHER,
  NAME,
  MARK,
  DEPTH)
AS (SELECT
      HORSE.CODE_HORSE,
      HORSE.CODE_FATHER,
      HORSE.CODE_MOTHER,
      HORSE.NAME,
      CAST('' AS VARCHAR(80)),
      0
    FROM
      HORSE
    WHERE
      HORSE.CODE_HORSE = :CODE_HORSE
    UNION ALL
    SELECT
      HORSE.CODE_HORSE,
      HORSE.CODE_FATHER,
      HORSE.CODE_MOTHER,
      HORSE.NAME,
      'F' || PEDIGREE.MARK,
      PEDIGREE.DEPTH + 1
    FROM
      HORSE
      JOIN PEDIGREE
        ON HORSE.CODE_HORSE = PEDIGREE.CODE_FATHER
    WHERE
      PEDIGREE.DEPTH < :MAX_DEPTH
    UNION ALL
    SELECT
      HORSE.CODE_HORSE,
      HORSE.CODE_FATHER,
      HORSE.CODE_MOTHER,
      HORSE.NAME,
      'M' || PEDIGREE.MARK,
      PEDIGREE.DEPTH + 1
    FROM
      HORSE
      JOIN PEDIGREE
        ON HORSE.CODE_HORSE = PEDIGREE.CODE_MOTHER
    WHERE
      PEDIGREE.DEPTH < :MAX_DEPTH
)
SELECT
  CODE_HORSE,
  NAME,
  MARK,
  DEPTH
FROM
  PEDIGREE
Notes on recursive CTEs
  • Aggregates (DISTINCT, GROUP BY, HAVING) and aggregate functions (SUM, COUNT, MAX etc) are not allowed in recursive union members.

  • A recursive reference cannot participate in an outer join.

  • The maximum recursion depth is 1024.

6.2. INSERT

Used for

Inserting rows of data into a table

Available in

DSQL, ESQL, PSQL

Syntax
INSERT INTO target
  {DEFAULT VALUES | [(<column_list>)] <value_source>}
  [RETURNING <returning_list> [INTO <variables>]]

<column_list> ::= colname [, colname ...]

<value_source> ::= VALUES (<value_list>) | <select_stmt>

<value_list> ::= <value> [, <value> ...]

<returning_list> ::=
  <ret_value> [[AS] ret_alias] [, <ret_value> [[AS] ret_alias] ...]

<ret_value> ::= { colname | target.colname | <value> }

<variables> ::= [:]varname [, [:]varname ...]
Table 80. Arguments for the INSERT Statement Parameters
Argument Description

target

The name of the table or view to which a new row, or batch of rows, should be added

colname

Column in the table or view

value

An expression whose value is used for inserting into the table or for returning

ret_value

The expression to be returned in the RETURNING clause

varname

Name of a PSQL local variable

The INSERT statement is used to add rows to a table or to one or more tables underlying a view:

  • If the column values are supplied in a VALUES clause, exactly one row is inserted

  • The values may be provided instead by a SELECT expression, in which case zero to many rows may be inserted

  • With the DEFAULT VALUES clause, no values are provided at all and exactly one row is inserted.

Restrictions
  • Columns returned to the NEW.column_name context variables in triggers should not have a colon (“:”) prefixed to their names

  • No column may appear more than once in the column list.

ALERT: BEFORE INSERT Triggers

Regardless of the method used for inserting rows, be mindful of any columns in the target table or view that are populated by BEFORE INSERT triggers, such as primary keys and case-insensitive search columns. If, as they should, those triggers test NEW.column_name for NULL, the columns they populate should be excluded from both the column_list and the VALUES list.

6.2.1. INSERT …​ VALUES

The VALUES list must provide a value for every column in the column list, in the same order and of the correct type. The column list need not specify every column in the target but, if the column list is absent, the engine requires a value for every column in the table or view (computed columns excluded).

Introducer syntax provides a way to identify the character set of a value that is a string constant (literal). Introducer syntax works only with literal strings: it cannot be applied to string variables, parameters, column references or values that are expressions.

Examples
INSERT INTO cars (make, model, year)
VALUES ('Ford', 'T', 1908);

INSERT INTO cars
VALUES ('Ford', 'T', 1908, 'USA', 850);

-- notice the '_' prefix (introducer syntax)
INSERT INTO People
VALUES (_ISO8859_1 'Hans-Jörg Schäfer');

6.2.2. INSERT …​ SELECT

For this method of inserting, the output columns of the SELECT statement must provide a value for every target column in the column list, in the same order and of the correct type.

Literal values, context variables or expressions of compatible type can be substituted for any column in the source row. In this case, a source column list and a corresponding VALUES list are required.

If the column list is absent — as it is when SELECT * is used for the source expression — the SELECT must supply a value, in the correct order, for every column of the target table or view (computed columns excluded).

Examples
INSERT INTO cars (make, model, year)
  SELECT make, model, year
  FROM new_cars;

INSERT INTO cars
  SELECT * FROM new_cars;

INSERT INTO Members (number, name)
  SELECT number, name FROM NewMembers
    WHERE Accepted = 1
UNION ALL
  SELECT number, name FROM SuspendedMembers
    WHERE Vindicated = 1

INSERT INTO numbers(num)
  WITH RECURSIVE r(n) as (
    SELECT 1 FROM rdb$database
    UNION ALL
    SELECT n+1 FROM r WHERE n < 100
  )
SELECT n FROM r

Of course, the column names in the source table need not be the same as those in the target table. Any type of SELECT statement is permitted, as long as its output columns exactly match the insert columns in number, order and type. Types need not be exactly the same, but they must be assignment-compatible.

When using an INSERT … SELECT with a RETURNING clause, the SELECT has to produce at most one row, as RETURNING currently only works for statements affecting at most one row.

This behaviour may change in future Firebird versions.

6.2.3. INSERT …​ DEFAULT VALUES

The DEFAULT VALUES clause allows insertion of a record without providing any values at all, either directly or from a SELECT statement. This is only possible if every NOT NULL or CHECKed column in the table either has a valid default declared or gets such a value from a BEFORE INSERT trigger. Furthermore, triggers providing required field values must not depend on the presence of input values.

Example
INSERT INTO journal
  DEFAULT VALUES
RETURNING entry_id;

6.2.4. The RETURNING clause

An INSERT statement adding at most one row may optionally include a RETURNING clause in order to return values from the inserted row. The clause, if present, need not contain all of the insert columns and may also contain other columns or expressions. The returned values reflect any changes that may have been made in BEFORE INSERT triggers.

The optional INTO sub-clause is only valid in PSQL.

Multiple INSERTs

In DSQL, a statement with RETURNING always returns only one row. If the RETURNING clause is specified and more than one row is inserted by the INSERT statement, the statement fails and an error message is returned. This behaviour may change in future Firebird versions.

Examples
INSERT INTO Scholars (
  firstname,
  lastname,
  address,
  phone,
  email)
VALUES (
  'Henry',
  'Higgins',
  '27A Wimpole Street',
  '3231212',
  NULL)
RETURNING lastname, fullname, id;

INSERT INTO Dumbbells (firstname, lastname, iq)
  SELECT fname, lname, iq
FROM Friends
  ORDER BY iq ROWS 1
  RETURNING id, firstname, iq
INTO :id, :fname, :iq;
Notes
  • RETURNING is supported for VALUES and DEFAULT VALUES inserts, and singleton SELECT inserts.

  • In DSQL, a statement with a RETURNING clause always returns exactly one row. If no record was actually inserted, the fields in this row are all NULL. This behaviour may change in a later version of Firebird. In PSQL, if no row was inserted, nothing is returned, and the target variables keep their existing values.

6.2.5. Inserting into BLOB columns

Inserting into BLOB columns is only possible under the following circumstances:

  1. The client application has made special provisions for such inserts, using the Firebird API. In this case, the modus operandi is application-specific and outside the scope of this manual.

  2. The value inserted is a string literal of no more than 65,533 bytes (64KB - 3).

    A limit, in characters, is calculated at run-time for strings that are in multi-byte character sets, to avoid overrunning the bytes limit. For example, for a UTF8 string (max. 4 bytes/character), the run-time limit is likely to be about (floor(65533/4)) = 16383 characters.

  3. You are using the “INSERT …​ SELECT” form and one or more columns in the result set are BLOBs.

6.3. UPDATE

Used for

Modifying rows in tables and views

Available in

DSQL, ESQL, PSQL

Syntax
UPDATE target [[AS] alias]
  SET col = <value> [, col = <value> ...]
  [WHERE {<search-conditions> | CURRENT OF cursorname}]
  [PLAN <plan_items>]
  [ORDER BY <sort_items>]
  [ROWS m [TO n]]
  [RETURNING <returning_list> [INTO <variables>]]

<returning_list> ::=
  <ret_value> [[AS] ret_alias] [, <ret_value> [[AS] ret_alias] ...]

<ret_value> ::=
    colname
  | table_or_alias.colname
  | NEW.colname
  | OLD.colname
  | <value>

<variables> ::= [:]varname [, [:]varname ...]
Table 81. Arguments for the UPDATE Statement Parameters
Argument Description

target

The name of the table or view where the records are updated

alias

Alias for the table or view

col

Name or alias of a column in the table or view

value

Expression for the new value for a column that is to be updated in the table or view by the statement, or a value to be returned

search-conditions

A search condition limiting the set of the rows to be updated

cursorname

The name of the cursor through which the row(s) to be updated are positioned

plan_items

Clauses in the query plan

sort_items

Columns listed in an ORDER BY clause

m, n

Integer expressions for limiting the number of rows to be updated

ret_value

A value to be returned in the RETURNING clause

varname

Name of a PSQL local variable

The UPDATE statement changes values in a table or in one or more of the tables that underlie a view. The columns affected are specified in the SET clause. The rows affected may be limited by the WHERE and ROWS clauses. If neither WHERE nor ROWS is present, all the records in the table will be updated.

6.3.1. Using an alias

If you assign an alias to a table or a view, the alias must be used when specifying columns and also in any column references included in other clauses.

Example

Correct usage:

update Fruit set soort = 'pisang' where ...

update Fruit set Fruit.soort = 'pisang' where ...

update Fruit F set soort = 'pisang' where ...

update Fruit F set F.soort = 'pisang' where ...

Not possible:

update Fruit F set Fruit.soort = 'pisang' where ...

6.3.2. The SET Clause

In the SET clause, the assignment phrases, containing the columns with the values to be set, are separated by commas. In an assignment phrase, column names are on the left and the values or expressions containing the assignment values are on the right. A column may be included only once in the SET clause.

A column name can be used in expressions on the right. The old value of the column will always be used in these right-side values, even if the column was already assigned a new value earlier in the SET clause.

Here is an example

Data in the TSET table:

A B
---
1 0
2 0

The statement:

UPDATE tset SET a = 5, b = a;

will change the values to:

A B
---
5 1
5 2

Notice that the old values (1 and 2) are used to update the b column even after the column was assigned a new value (5).

Before Firebird 2.5, columns got their new values immediately upon assignment. It was non-standard behaviour that was fixed in Firebird 2.5.

6.3.3. The WHERE Clause

The WHERE clause sets the conditions that limit the set of records for a searched update.

In PSQL, if a named cursor is being used for updating a set, using the WHERE CURRENT OF clause, the action is limited to the row where the cursor is currently positioned. This is a positioned update.

The WHERE CURRENT OF clause is available only in PSQL, since there is no statement for creating and manipulating an explicit cursor in DSQL. Searched updates are also available in PSQL, of course.

Examples
UPDATE People
  SET firstname = 'Boris'
  WHERE lastname = 'Johnson';

UPDATE employee e
  SET salary = salary * 1.05
  WHERE EXISTS(
         SELECT *
           FROM employee_project ep
           WHERE e.emp_no = ep.emp_no);

UPDATE addresses
  SET city = 'Saint Petersburg', citycode = 'PET'
  WHERE city = 'Leningrad'

UPDATE employees
  SET salary = 2.5 * salary
  WHERE title = 'CEO'

For string literals with which the parser needs help to interpret the character set of the data, the introducer syntax may be used. The string literal is preceded by the character set name, prefixed with an underscore character:

-- notice the '_' prefix

UPDATE People
SET name = _ISO8859_1 'Hans-Jörg Schäfer'
WHERE id = 53662;

6.3.4. The ORDER BY and ROWS Clauses

The ORDER BY and ROWS clauses only make sense when used together; nevertheless, each of them is syntactically valid on its own.

If ROWS has one argument, m, the rows to be updated will be limited to the first m rows.

Points to note
  • If m > the number of rows being processed, the entire set of rows is updated

  • If m = 0, no rows are updated

  • If m < 0, an error occurs and the update fails

If two arguments are used, m and n, ROWS limits the rows being updated to rows from m to n inclusively. Both arguments are integers and start from 1.

Points to note
  • If m > the number of rows being processed, no rows are updated

  • If n > the number of rows, rows from m to the end of the set are updated

  • If m < 1 or n < 1, an error occurs and the update fails

  • If n = m - 1, no rows are updated

  • If n < m - 1, an error occurs and the update fails

ROWS Example
UPDATE employees
SET salary = salary + 50
ORDER BY salary ASC
ROWS 20;
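
A two-argument variant, updating only rows 21 to 40 of the ordered set (a sketch on the same hypothetical EMPLOYEES table):

UPDATE employees
SET salary = salary + 50
ORDER BY salary ASC
ROWS 21 TO 40;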

6.3.5. The RETURNING Clause

An UPDATE statement involving at most one row may include RETURNING in order to return some values from the row being updated. RETURNING may include data from any column of the row, not necessarily the columns that are currently being updated. It can include literals or expressions not associated with columns, if there is a need for that.

When the RETURNING set contains data from the current row, the returned values report changes made in the BEFORE UPDATE triggers, but not those made in AFTER UPDATE triggers.

The context variables OLD.fieldname and NEW.fieldname can be used as column names. If OLD. or NEW. is not specified, the column values returned are the NEW. ones.

In DSQL, a statement with RETURNING always returns a single row. Attempts to execute an UPDATE …​ RETURNING …​ that affects multiple rows will result in the error “multiple rows in singleton select”. If the statement updates no records, the returned values contain NULL. This behaviour may change in future Firebird versions.

The INTO Sub-clause

In PSQL, the INTO clause can be used to pass the returning values to local variables. It is not available in DSQL. If no records are updated, nothing is returned and variables specified in RETURNING will keep their previous values.

RETURNING Example (DSQL)
UPDATE Scholars
SET firstname = 'Hugh', lastname = 'Pickering'
WHERE firstname = 'Henry' and lastname = 'Higgins'
RETURNING id, old.lastname, new.lastname;

6.3.6. Updating BLOB columns

Updating a BLOB column always replaces the entire contents. Even the BLOB ID, the “handle” that is stored directly in the column, is changed. BLOBs can be updated if:

  1. The client application has made special provisions for this operation, using the Firebird API. In this case, the modus operandi is application-specific and outside the scope of this manual.

  2. The new value is a string literal of no more than 65,533 bytes (64KB - 3).

    A limit, in characters, is calculated at run-time for strings that are in multi-byte character sets, to avoid overrunning the bytes limit. For example, for a UTF8 string (max. 4 bytes/character), the run-time limit is likely to be about (floor(65533/4)) = 16383 characters.

  3. The source is itself a BLOB column or, more generally, an expression that returns a BLOB.

  4. You use the INSERT CURSOR statement (ESQL only).

6.4. UPDATE OR INSERT

Used for

Updating an existing record in a table or, if it does not exist, inserting it

Available in

DSQL, PSQL

Syntax
UPDATE OR INSERT INTO
  target [(<column_list>)]
  VALUES (<value_list>)
  [MATCHING (<column_list>)]
  [RETURNING <values> [INTO <variables>]]

<column_list> ::= colname  [, colname ...]

<value_list> ::= <value> [, <value> ...]

<returning_list> ::= <ret_value> [, <ret_value> ...]

<ret_value> ::=
    colname
  | NEW.colname
  | OLD.colname
  | <value>

<variables> ::= [:]varname [, [:]varname ...]
Table 82. Arguments for the UPDATE OR INSERT Statement Parameters
Argument Description

target

The name of the table or view where the record(s) is to be updated or a new record inserted

colname

Name of a column in the table or view

value

An expression whose value is to be used for inserting or updating the table, or returning a value

ret_value

An expression returned in the RETURNING clause

varname

Variable name — PSQL only

UPDATE OR INSERT inserts a new record or updates one or more existing records. The action taken depends on the values provided for the columns in the MATCHING clause (or, if the latter is absent, in the primary key). If there are records found matching those values, they are updated. If not, a new record is inserted. A match only counts if all the values in the MATCHING or primary key columns are equal. Matching is done with the IS NOT DISTINCT operator, so one NULL matches another.

Restrictions
  • If the table has no primary key, the MATCHING clause is mandatory.

  • In the MATCHING list as well as in the update/insert column list, each column name may occur only once.

  • The “INTO <variables>” subclause is only available in PSQL.

  • When values are returned into the context variable NEW, this name must not be preceded by a colon (“:”).

6.4.1. The RETURNING clause

The optional RETURNING clause, if present, need not contain all the columns mentioned in the statement and may also contain other columns or expressions. The returned values reflect any changes that may have been made in BEFORE triggers, but not those in AFTER triggers. OLD.fieldname and NEW.fieldname may both be used in the list of columns to return; for field names not preceded by either of these, the new value is returned.

In DSQL, a statement with a RETURNING clause always returns exactly one row. If a RETURNING clause is present and more than one matching record is found, an error “multiple rows in singleton select” is raised. This behaviour may change in a later version of Firebird.

The optional INTO sub-clause is only valid in PSQL.

6.4.2. Example of UPDATE OR INSERT

Modifying data in a table, using UPDATE OR INSERT in a PSQL module. The return value is passed to a local variable, whose colon prefix is optional.

UPDATE OR INSERT INTO Cows (Name, Number, Location)
  VALUES ('Suzy Creamcheese', 3278823, 'Green Pastures')
  MATCHING (Number)
  RETURNING rec_id into :id;
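
In DSQL, where INTO is not available, a similar statement could return the values directly to the client. A sketch (when the statement takes the insert path, the OLD.Location value should be NULL):

UPDATE OR INSERT INTO Cows (Name, Number, Location)
  VALUES ('Suzy Creamcheese', 3278823, 'Green Pastures')
  MATCHING (Number)
  RETURNING rec_id, OLD.Location, NEW.Location;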

6.5. DELETE

Used for

Deleting rows from a table or view

Available in

DSQL, ESQL, PSQL

Syntax
DELETE
  FROM target [[AS] alias]
  [WHERE {<search-conditions> | CURRENT OF cursorname}]
  [PLAN <plan_items>]
  [ORDER BY <sort_items>]
  [ROWS m [TO n]]
  [RETURNING <returning_list> [INTO <variables>]]

<returning_list> ::=
  <ret_value> [[AS] ret_alias] [, <ret_value> [[AS] ret_alias] ...]

<ret_value> ::=
  { colname | target_or_alias.colname | <value> }

<variables> ::=
  [:]varname [, [:]varname ...]
Table 83. Arguments for the DELETE Statement Parameters
Argument Description

target

The name of the table or view from which the records are to be deleted

alias

Alias for the target table or view

search-conditions

Search condition limiting the set of rows being targeted for deletion

cursorname

The name of the cursor in which the current record is positioned for deletion

plan_items

Query plan clause

sort_items

ORDER BY clause

m, n

Integer expressions for limiting the number of rows being deleted

ret_value

An expression to be returned in the RETURNING clause

value

An expression whose value is used for returning

varname

Name of a PSQL variable

DELETE removes rows from a database table or from one or more of the tables that underlie a view. WHERE and ROWS clauses can limit the number of rows deleted. If neither WHERE nor ROWS is present, DELETE removes all the rows in the relation.

6.5.1. Aliases

If an alias is specified for the target table or view, it must be used to qualify all field name references in the DELETE statement.

Examples

Supported usage:

delete from Cities where name starting 'Alex';

delete from Cities where Cities.name starting 'Alex';

delete from Cities C where name starting 'Alex';

delete from Cities C where C.name starting 'Alex';

Not possible:

delete from Cities C where Cities.name starting 'Alex';

6.5.2. WHERE

The WHERE clause sets the conditions that limit the set of records for a searched delete.

In PSQL, if a named cursor is being used for deleting a set, using the WHERE CURRENT OF clause, the action is limited to the row where the cursor is currently positioned. This is a positioned delete.

The WHERE CURRENT OF clause is available only in PSQL and ESQL, since there is no statement for creating and manipulating an explicit cursor in DSQL. Searched deletes are also available in PSQL, of course.

Examples
DELETE FROM People
  WHERE firstname <> 'Boris' AND lastname <> 'Johnson';

DELETE FROM employee e
  WHERE NOT EXISTS(
    SELECT *
    FROM employee_project ep
     WHERE e.emp_no = ep.emp_no);

DELETE FROM Cities
  WHERE CURRENT OF Cur_Cities;  -- ESQL and PSQL only
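
The last statement requires an open cursor named Cur_Cities. A minimal PSQL sketch of such a positioned delete, reusing the Cities table from the alias examples above (FOR SELECT … AS CURSOR declares the named cursor):

EXECUTE BLOCK AS
  DECLARE city_name TYPE OF COLUMN Cities.name;
BEGIN
  FOR SELECT name FROM Cities
      WHERE name STARTING 'Alex'
      INTO :city_name
      AS CURSOR cur_cities
  DO
    -- Deletes only the row on which the cursor is currently positioned
    DELETE FROM Cities WHERE CURRENT OF cur_cities;
END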

6.5.3. PLAN

A PLAN clause allows the user to optimize the operation manually.

Example
DELETE FROM Submissions
  WHERE date_entered < '1-Jan-2002'
  PLAN (Submissions INDEX ix_subm_date);

6.5.4. ORDER BY and ROWS

The ORDER BY clause orders the set before the actual deletion takes place. It only makes sense in combination with ROWS, but is also valid without it.

The ROWS clause limits the number of rows being deleted. Integer literals or any integer expressions can be used for the arguments m and n.

If ROWS has one argument, m, the rows to be deleted will be limited to the first m rows.

Points to note
  • If m > the number of rows being processed, the entire set of rows is deleted

  • If m = 0, no rows are deleted

  • If m < 0, an error occurs and the deletion fails

If two arguments are used, m and n, ROWS limits the rows being deleted to rows from m to n inclusively. Both arguments are integers and start from 1.

Points to note
  • If m > the number of rows being processed, no rows are deleted

  • If m > 0 and not greater than the number of rows in the set, and n is beyond the last row, rows from m to the end of the set are deleted

  • If m < 1 or n < 1, an error occurs and the deletion fails

  • If n = m - 1, no rows are deleted

  • If n < m - 1, an error occurs and the deletion fails

Examples

Deleting the oldest purchase:

DELETE FROM Purchases
  ORDER BY bydate ROWS 1;

Deleting the highest custno(s):

DELETE FROM Sales
  ORDER BY custno DESC ROWS 1 to 10;

Deleting all sales, ORDER BY clause pointless:

DELETE FROM Sales
  ORDER BY custno DESC;

Deleting one record starting from the end, i.e. from Z downwards:

DELETE FROM popgroups
  ORDER BY name DESC ROWS 1;

Deleting the five oldest groups:

DELETE FROM popgroups
  ORDER BY formed ROWS 5;

No sorting (ORDER BY) is specified, so the 8 records from the fifth to the twelfth, in the order they are found, will be deleted:

DELETE FROM popgroups
  ROWS 5 TO 12;

6.5.5. RETURNING

A DELETE statement removing at most one row may optionally include a RETURNING clause in order to return values from the deleted row. The clause, if present, need not contain all the relation’s columns and may also contain other columns or expressions.

  • In DSQL, a statement with RETURNING always returns a singleton, never a multi-row set. If a RETURNING clause is present and more than one matching record is found, an error “multiple rows in singleton select” is raised. If no records are deleted, the returned columns contain NULL. This behaviour may change in future Firebird versions.

  • The INTO clause is available only in PSQL

    • If the row is not deleted, nothing is returned and the target variables keep their values

Examples
DELETE FROM Scholars
  WHERE firstname = 'Henry' and lastname = 'Higgins'
  RETURNING lastname, fullname, id;

DELETE FROM Dumbbells
  ORDER BY iq DESC
  ROWS 1
  RETURNING lastname, iq into :lname, :iq;

6.6. MERGE

Used for

Merging data from a source set into a target relation

Available in

DSQL, PSQL

Syntax
MERGE INTO target [[AS] target_alias]
  USING <source> [[AS] source_alias]
  ON <join_condition>
  <merge_when> [<merge_when> ...]
  [RETURNING <returning_list> [INTO <variables>]]

<merge_when> ::=
    <merge_when_matched>
  | <merge_when_not_matched>

<merge_when_matched> ::=
  WHEN MATCHED [ AND <condition> ] THEN
  { UPDATE SET <assignment_list>
  | DELETE }

<merge_when_not_matched> ::=
  WHEN NOT MATCHED [ AND <condition> ] THEN
  INSERT [( <column_list> )] VALUES ( <value_list> )

<source> ::= tablename | (<select_stmt>)

<assignment_list> ::=
  colname = <value> [, colname = <value> ...]

<column_list> ::= colname [, colname ...]

<value_list> ::= <value> [, <value> ...]

<returning_list> ::=
  <ret_value> [[AS] ret_alias] [, <ret_value> [[AS] ret_alias] ...]

<ret_value> ::=
    colname
  | table_or_alias.colname
  | NEW.colname
  | OLD.colname
  | <value>

<variables> ::=
  [:]varname [, [:]varname ...]
Table 84. Arguments for the MERGE Statement Parameters
Argument Description

target

Name of target relation (table or updatable view)

source

Data source. It can be a table, a view, a stored procedure or a derived table

target_alias

Alias for the target relation (table or updatable view)

source_alias

Alias for the source relation or set

join_condition

The (ON) condition(s) for matching the source records with those in the target

condition

Additional test condition in WHEN MATCHED or WHEN NOT MATCHED clause

tablename

Table or view name

select_stmt

Select statement of the derived table

colname

Name of a column in the target relation

value

The value assigned to a column in the target table. This expression may be a literal value, a PSQL variable, a column from the source, or a compatible context variable

ret_value

The expression to be returned in the RETURNING clause. It can be a column reference to the source or target, a column reference in the NEW or OLD context of the target, or a value

ret_alias

Alias for the value expression in the RETURNING clause

varname

Name of a PSQL local variable

The MERGE statement merges records from the source into a target table or updatable view. The source may be a table, view or “anything you can SELECT from” in general. Each source record will be used to update one or more target records, insert a new record in the target table, delete a record from the target table or do nothing.

The action taken depends on the supplied join condition, the WHEN clause(s), and the optional condition in the WHEN clause. The join condition and the condition in the WHEN clause will typically contain a comparison of fields in the source and target relations.

Multiple WHEN MATCHED and WHEN NOT MATCHED clauses are allowed. For each row in the source, the WHEN clauses are checked in the order they are specified in the statement. If the condition in a WHEN clause does not evaluate to true, the clause is skipped and the next one is checked. This continues until the condition for a WHEN clause evaluates to true, a WHEN clause without a condition matches, or there are no more WHEN clauses. If a matching clause is found, the action associated with it is executed. For each row in the source, at most one action is executed.

At least one WHEN clause must be present.

WHEN NOT MATCHED is evaluated from the source viewpoint, that is, the table or set specified in USING. It has to work this way because if the source record does not match a target record, INSERT is executed. Of course, if there is a target record which does not match a source record, nothing is done.

Currently, the ROW_COUNT variable returns the value 1, even if more than one record is modified or inserted. For details and progress, refer to Tracker ticket CORE-4400.
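
A minimal PSQL sketch of this limitation, reusing the numbers table from the examples below:

EXECUTE BLOCK RETURNS (merged_rows INTEGER) AS
BEGIN
  MERGE INTO numbers
    USING (SELECT 1 AS n FROM rdb$database) t
    ON numbers.num = t.n
    WHEN MATCHED THEN UPDATE SET num = t.n
    WHEN NOT MATCHED THEN INSERT (num) VALUES (t.n);
  -- Currently reports 1, even if the MERGE modified or inserted several rows
  merged_rows = ROW_COUNT;
  SUSPEND;
END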

ALERT: Another irregularity!

If the WHEN MATCHED clause is present and several records match a single record in the target table, an UPDATE will be executed on that one target record for each one of the matching source records, with each successive update overwriting the previous one. This behaviour does not comply with the SQL:2003 standard, which requires that this situation throw an exception (an error).

This has been fixed in Firebird 4, which raises an error instead. See also CORE-2274.

6.6.1. The RETURNING Clause

A MERGE statement that affects at most one row can contain a RETURNING clause to return the values added, modified or removed. If a RETURNING clause is present and more than one matching record is found, the error “multiple rows in singleton select” is raised. The RETURNING clause can contain any columns from the target table (or updatable view), as well as other columns (e.g. from the source) and expressions.

The optional INTO sub-clause is only valid in PSQL.

The restriction that RETURNING can only be used with a statement that affects at most one row might be removed in a future version.

Column names can be qualified by the OLD or NEW prefix to define exactly what value to return: before or after modification. The returned values include the changes made by BEFORE triggers.

For the UPDATE or INSERT action, unqualified column names or those qualified by the target table name or alias will behave as if qualified by NEW, while for the DELETE action as if qualified by OLD.

The following example restricts the inventory example, shown in full as the last example under Examples of MERGE below, to a single product so that at most one row is affected, and adds a RETURNING clause to return the old and new quantity of goods, and the difference between those values.

Using MERGE with a RETURNING clause
MERGE INTO PRODUCT_INVENTORY AS TARGET
USING (
  SELECT
    SL.ID_PRODUCT,
    SUM(SL.QUANTITY)
  FROM SALES_ORDER_LINE SL
  JOIN SALES_ORDER S ON S.ID = SL.ID_SALES_ORDER
  WHERE S.BYDATE = CURRENT_DATE
  AND SL.ID_PRODUCT = :ID_PRODUCT
  GROUP BY 1
) AS SRC (ID_PRODUCT, QUANTITY)
ON TARGET.ID_PRODUCT = SRC.ID_PRODUCT
WHEN MATCHED AND TARGET.QUANTITY - SRC.QUANTITY <= 0 THEN
  DELETE
WHEN MATCHED THEN
  UPDATE SET
    TARGET.QUANTITY = TARGET.QUANTITY - SRC.QUANTITY,
    TARGET.BYDATE = CURRENT_DATE
RETURNING OLD.QUANTITY, NEW.QUANTITY, SRC.QUANTITY
INTO :OLD_QUANTITY, :NEW_QUANTITY, :DIFF_QUANTITY;

6.6.2. Examples of MERGE

  1. Update books when present, or add new record if absent

    MERGE INTO books b
      USING purchases p
      ON p.title = b.title and p.type = 'bk'
      WHEN MATCHED THEN
        UPDATE SET b.descr = b.descr || '; ' || p.descr
      WHEN NOT MATCHED THEN
        INSERT (title, descr, bought) values (p.title, p.descr, p.bought);
  2. Using a derived table

    MERGE INTO customers c
      USING (SELECT * from customers_delta WHERE id > 10) cd
      ON (c.id = cd.id)
      WHEN MATCHED THEN
        UPDATE SET name = cd.name
      WHEN NOT MATCHED THEN
        INSERT (id, name) values (cd.id, cd.name);
  3. Together with a recursive CTE

    MERGE INTO numbers
      USING (
        WITH RECURSIVE r(n) AS (
          SELECT 1 FROM rdb$database
          UNION ALL
          SELECT n+1 FROM r WHERE n < 200
        )
        SELECT n FROM r
      ) t
      ON numbers.num = t.n
      WHEN NOT MATCHED THEN
        INSERT(num) VALUES(t.n);
  4. Using DELETE clause

    MERGE INTO SALARY_HISTORY
    USING (
      SELECT EMP_NO
      FROM EMPLOYEE
      WHERE DEPT_NO = 120) EMP
    ON SALARY_HISTORY.EMP_NO = EMP.EMP_NO
    WHEN MATCHED THEN DELETE;
  5. The following example updates the PRODUCT_INVENTORY table daily based on orders processed in the SALES_ORDER_LINE table. If the stock level of the product would drop to zero or lower, then the row for that product is removed from the PRODUCT_INVENTORY table.

    MERGE INTO PRODUCT_INVENTORY AS TARGET
    USING (
      SELECT
        SL.ID_PRODUCT,
        SUM (SL.QUANTITY)
      FROM SALES_ORDER_LINE SL
      JOIN SALES_ORDER S ON S.ID = SL.ID_SALES_ORDER
      WHERE S.BYDATE = CURRENT_DATE
      GROUP BY 1
    ) AS SRC (ID_PRODUCT, QUANTITY)
    ON TARGET.ID_PRODUCT = SRC.ID_PRODUCT
    WHEN MATCHED AND TARGET.QUANTITY - SRC.QUANTITY <= 0 THEN
      DELETE
    WHEN MATCHED THEN
      UPDATE SET
        TARGET.QUANTITY = TARGET.QUANTITY - SRC.QUANTITY,
        TARGET.BYDATE = CURRENT_DATE;