The source of much copied reference material: Paul Vinkenoog
Copyright © 2017-2024 Firebird Project and all contributing authors, under the Public Documentation License Version 1.0.
Please refer to the License Notice in the Appendix
If you notice any discrepancies, errors or anything missing, please report this on https://github.com/FirebirdSQL/firebird-documentation/issues or submit a pull request with the necessary changes.
1. About the Firebird 5.0 Language Reference
This Language Reference describes the SQL language supported by Firebird 5.0.
This Firebird 5.0 Language Reference is the fourth comprehensive manual to cover all aspects of the query language used by developers to communicate, through their applications, with the Firebird relational database management system.
1.1. Subject
The subject of this volume is Firebird’s implementation of the SQL (“Structured Query Language”) relational database language. Firebird conforms closely with international standards for SQL, from data type support, data storage structures, referential integrity mechanisms, to data manipulation capabilities and access privileges. Firebird also implements a robust procedural language — procedural SQL (PSQL) — for stored procedures, stored functions, triggers, and dynamically-executable code blocks. These areas are addressed in this volume.
This document does not cover configuration of Firebird, Firebird command-line tools, nor its programming APIs. See Firebird RDBMS, and specifically Reference Manuals for more Firebird documentation.
1.2. Authorship
For the Firebird 5.0 version, the Firebird 4.0 Language Reference was taken as the base, and Firebird 5.0 information was added based on the Firebird 5.0 Release Notes and feature documentation.
1.2.1. Contributors
Direct Content

- Dmitry Filippov (writer)
- Alexander Karpeykin (writer)
- Alexey Kovyazin (writer, editor)
- Dmitry Kuzmenko (writer, editor)
- Denis Simonov (writer, editor)
- Paul Vinkenoog (writer, designer)
- Dmitry Yemanov (writer)
- Mark Rotteveel (writer, editor)

Resource Content

- Adriano dos Santos Fernandes
- Alexander Peshkov
- Vladyslav Khorsun
- Claudio Valderrama
- Helen Borrie
- … and others
1.3. Reporting Errors or Missing Content
If you find errors, missing content or other problems in this document, please report this in our issue tracker of the firebird-documentation GitHub repository.
Pull requests with changes and fixes are also much appreciated.
1.4. Acknowledgments
Sponsors and Other Donors
See also the Acknowledgements in the Firebird 2.5 Language Reference for the sponsors of the initial Russian version and its translation.
Sponsors of the Russian Language Reference Manual
- Moscow Exchange (Russia)

  Moscow Exchange is the largest exchange holding in Russia and Eastern Europe, founded on December 19, 2011, through the consolidation of the MICEX (founded in 1992) and RTS (founded in 1995) exchange groups. Moscow Exchange ranks among the world’s top 20 exchanges by trading volume in bonds and by the total capitalization of shares traded, as well as among the 10 largest exchange platforms for trading derivatives.

- Technical support and developer of administrator tools for the Firebird DBMS.
1.5. Contributing
There are several ways you can contribute to the documentation of Firebird, or Firebird in general:
- Participate on the mailing lists
- Report bugs or submit pull requests on the FirebirdSQL GitHub Project
- Become a developer (contact us on firebird-devel)
- Donate to the Firebird Foundation
- Become a paying member or sponsor of the Firebird Foundation
2. SQL Language Structure
This reference describes the SQL language supported by Firebird.
2.1. Background to Firebird’s SQL Language
To begin, a few points about some characteristics of Firebird’s SQL implementation.
2.1.1. SQL Flavours
Distinct subsets of SQL apply to different areas of activity. The SQL subsets in Firebird’s language implementation are:
- DSQL: Dynamic SQL
- PSQL: Procedural SQL
- ESQL: Embedded SQL
- ISQL: Interactive SQL
Dynamic SQL is the major part of the language which corresponds to Part 2 (SQL/Foundation) of the SQL specification. DSQL represents statements passed by client applications through the public Firebird API and processed by the database engine.
Procedural SQL augments Dynamic SQL to allow compound statements containing local variables, assignments, conditions, loops and other procedural constructs.
PSQL corresponds to Part 4 (SQL/PSM) of the SQL specifications.
PSQL extensions are available in persistent stored modules (procedures, functions and triggers), and in Dynamic SQL as well (see `EXECUTE BLOCK`).
Embedded SQL is the SQL subset supported by Firebird gpre, the application which allows you to embed SQL constructs into your host programming language (C, C++, Pascal, Cobol, etc.) and preprocess those embedded constructs into the proper Firebird API calls.
Only a subset of the statements and expressions implemented in DSQL are supported in ESQL.
Interactive ISQL refers to the language that can be executed using Firebird isql, the command-line application for accessing databases interactively. As a regular client application, its native language is DSQL. It also offers a few additional commands that are not available outside its specific environment.
Both DSQL and PSQL subsets are completely presented in this reference. Neither ESQL nor ISQL flavours are described here unless mentioned explicitly.
For ISQL, consult the manual Firebird Interactive SQL Utility.
2.1.2. SQL Dialects
SQL dialect is a term that defines the specific features of the SQL language that are available when accessing a database. SQL dialects can be defined at the database level and specified at the connection level. Three dialects are available:
- Dialect 1 is intended solely to allow backward compatibility with legacy databases from old InterBase versions, version 5 and below. A “Dialect 1” database retains certain language features that differ from Dialect 3, the default for Firebird databases.

  - Date and time information are stored in a `DATE` data type. A `TIMESTAMP` data type is also available, which is identical to this `DATE` implementation.
  - Double quotes may be used as an alternative to apostrophes for delimiting string data. This is contrary to the SQL standard — double quotes are reserved for a distinct syntactic purpose both in standard SQL and in Dialect 3. Double-quoting strings is therefore to be avoided.
  - The precision for `NUMERIC` and `DECIMAL` data types is smaller than in Dialect 3 and, if the precision of a fixed decimal number is greater than 9, Firebird stores it internally as a double-precision floating point value.
  - The `BIGINT` (64-bit integer) data type is not supported.
  - Identifiers are case-insensitive and must always comply with the rules for regular identifiers — see the section Identifiers below.
  - Although generator values are stored as 64-bit integers, a Dialect 1 client request, for example `SELECT GEN_ID(MyGen, 1)`, will return the generator value truncated to 32 bits.

- Dialect 2 is available only on a Firebird client connection and cannot be set in a database. It is intended to assist debugging of possible problems with legacy data when migrating a database from dialect 1 to 3.

- In Dialect 3 databases,

  - numbers (`DECIMAL` and `NUMERIC` data types) are stored as fixed-point values (scaled integers) for all precisions; depending on the type and precision, they are stored as a `SMALLINT`, `INTEGER`, `BIGINT` or `INT128`.
  - The `TIME` data type is available for storing time-of-day data.
  - The `DATE` data type stores only date information.
  - The `TIMESTAMP` data type stores date and time information.
  - The 64-bit integer data type `BIGINT` is available.
  - Double quotes are reserved for delimiting non-regular identifiers, enabling object names that are case-sensitive or that do not meet the requirements for regular identifiers in other ways.
  - Strings must be delimited with single quotes (apostrophes) or using Q-strings.
  - Generator values are stored as 64-bit integers.

Use of Dialect 3 is strongly recommended for newly developed databases and applications. Both database and connection dialects should match, except under migration conditions with Dialect 2. This reference describes the semantics of SQL Dialect 3 unless specified otherwise.
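The Dialect 3 quoting rules above can be sketched with a short example (table and column names here are hypothetical, chosen for illustration):

```sql
-- Dialect 3: single quotes delimit strings; double quotes delimit identifiers
SELECT 'O''Reilly' AS AUTHOR_NAME  -- string literal with an escaped apostrophe
FROM RDB$DATABASE;                 -- Firebird's one-row system table

-- a double-quoted identifier creates a case-sensitive object name
CREATE TABLE "CaseSensitive" ("id" INTEGER);
```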
2.2. Basic Elements: Statements, Clauses, Keywords
The primary construct in SQL is the statement. A statement defines what the database management system should do with a particular data or metadata object. More complex statements contain simpler constructs — clauses and options.
- Clauses

  A clause defines a certain type of directive in a statement. For instance, the `WHERE` clause in a `SELECT` statement and in other data manipulation statements (e.g. `UPDATE`, `DELETE`) specifies criteria for searching one or more tables for the rows that are to be selected, updated or deleted. The `ORDER BY` clause specifies how the output data — result set — should be sorted.

- Options

  Options, being the simplest constructs, are specified in association with specific keywords to provide qualification for clause elements. Where alternative options are available, it is usual for one of them to be the default, used if nothing is specified for that option. For instance, the `SELECT` statement will return all rows that match the search criteria unless the `DISTINCT` option restricts the output to non-duplicated rows.

- Keywords

  All words that are included in the SQL lexicon are keywords. Some keywords are reserved, meaning their usage as identifiers for database objects, parameter names or variables is prohibited in some or all contexts. Non-reserved keywords can be used as identifiers, although this is not recommended. From time to time, non-reserved keywords may become reserved, or new (reserved or non-reserved) keywords may be added when new language features are introduced. Although unlikely, reserved words may also change to non-reserved keywords, or keywords may be removed entirely, for example when parser rules change.
For example, the following statement will be executed without errors because, although `ABS` is a keyword, it is not a reserved word.

CREATE TABLE T (ABS INT NOT NULL);

On the contrary, the following statement will return an error because `ADD` is both a keyword and a reserved word.

CREATE TABLE T (ADD INT NOT NULL);
Refer to the list of reserved words and keywords in the chapter Reserved Words and Keywords.
2.3. Identifiers
All database objects have names, often called identifiers. The maximum identifier length is 63 characters in character set UTF8 (252 bytes).
It is possible to restrict the actual maximum identifier length through configuration. Consult the Firebird 4.0 Release Notes for details. In this language reference we assume the default configuration of 63 characters (252 bytes).
Two types of names are valid as identifiers: regular identifiers, similar to variable names in regular programming languages, and delimited identifiers that are specific to SQL. To be valid, each type of identifier must conform to a set of rules, as follows:
2.3.1. Rules for Regular Identifiers
- Length cannot exceed 63 characters
- The name must start with an unaccented, 7-bit ASCII alphabetic character. It may be followed by other 7-bit ASCII letters, digits, underscores or dollar signs. No other characters, including spaces, are valid. The name is case-insensitive, meaning it can be declared and used in either upper or lower case. Thus, from the system’s point of view, the following names are the same:

  fullname FULLNAME FuLlNaMe FullName

- Regular identifiers are stored in uppercase

<name> ::=
    <letter> | <name><letter> | <name><digit> | <name>_ | <name>$

<letter> ::= <upper letter> | <lower letter>

<upper letter> ::=
    A | B | C | D | E | F | G | H | I | J | K | L | M |
    N | O | P | Q | R | S | T | U | V | W | X | Y | Z

<lower letter> ::=
    a | b | c | d | e | f | g | h | i | j | k | l | m |
    n | o | p | q | r | s | t | u | v | w | x | y | z

<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
2.3.2. Rules for Delimited Identifiers
- Length cannot exceed 63 characters in character set UTF8 (252 bytes). Identifiers are stored in character set UTF8, which means characters outside the ASCII range are stored using 2 to 4 bytes.
- The entire string must be enclosed in double-quotes, e.g. `"anIdentifier"`
- A double-quote in an identifier can be escaped by doubling it, e.g. `"with""doublequote"`; we recommend avoiding double-quotes in delimited identifiers.
- It may contain any character from the UTF8 character set, including accented characters, spaces and special characters
- An identifier can be a reserved word
- Delimited identifiers are stored as-is and are case-sensitive in all contexts
- Trailing spaces in delimited identifiers are removed, as with any string constant
- Delimited identifiers are available in Dialect 3 only. For more details on dialects, see SQL Dialects
<delimited identifier> ::= "<permitted character>[<permitted character> ...]"
A delimited identifier such as `"FULLNAME"` is the same as the regular identifiers `FULLNAME`, `fullname`, `FullName`, and so on. The reason is that Firebird stores regular identifiers in upper case, regardless of how they were defined or declared. Delimited identifiers are always stored according to the exact case of their definition or declaration. Thus, `"FullName"` (quoted, or delimited) is different from `FullName` (unquoted, or regular), which is stored as `FULLNAME` in the metadata.
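A short sketch of these case-sensitivity rules (the table and column names are hypothetical):

```sql
CREATE TABLE PERSONS (FullName VARCHAR(60));  -- regular identifier, stored as FULLNAME

SELECT fullname   FROM PERSONS;  -- OK: regular identifiers are case-insensitive
SELECT "FULLNAME" FROM PERSONS;  -- OK: delimited, matches the stored upper-case name
SELECT "FullName" FROM PERSONS;  -- error: no column with that exact case exists
```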
2.4. Literals
Literals are used to directly represent values in a statement. Examples of standard types of literals are:
integer - 0, -34, 45, 0X080000000
fixed-point - 0.0, -3.14
floating-point - 3.23e-23
string - 'text', 'don''t!', Q'{don't!}'
binary string - x'48656C6C6F20776F726C64'
date - DATE '2018-01-19'
time - TIME '15:12:56'
timestamp - TIMESTAMP '2018-01-19 13:32:02'
boolean - true, false, unknown
null state - null
Details about literals for each data type are discussed in section Literals (Constants) of chapter Common Language Elements.
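The literal forms listed above can be exercised in a single query; a minimal sketch using Firebird’s one-row system table `RDB$DATABASE`:

```sql
SELECT DATE '2018-01-19' AS D,   -- date literal
       TIME '15:12:56'   AS T,   -- time literal
       x'48656C6C6F'     AS BIN, -- binary string literal
       'don''t!'         AS S1,  -- string with an escaped apostrophe
       q'{don't!}'       AS S2,  -- alternative (Q) string literal
       TRUE              AS B    -- boolean literal
FROM RDB$DATABASE;
```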
2.5. Operators and Special Characters
A set of special characters is reserved for use as operators or separators.
<special char> ::=
any of <space> or "%&'()*+,-./:;<=>?|^{}
Some of these characters, alone or in combination, may be used as operators (arithmetical, string, logical), as SQL command separators, to quote identifiers, or to mark the limits of string literals or comments.
<operator> ::=
    <string concatenation operator>
  | <arithmetic operator>
  | <comparison operator>
  | <logical operator>

<string concatenation operator> ::= '||'

<arithmetic operator> ::= * | / | + | -

<comparison operator> ::=
    =  | <> | != | ~= | ^= | > | < | >= | <=
  | !> | ~> | ^> | !< | ~< | ^<

<logical operator> ::= NOT | AND | OR
For more details on operators, see Expressions.
2.6. Comments
Comments may be present in SQL scripts, SQL statements and PSQL modules. A comment can be any text, usually used to document how particular parts of the code work. The parser ignores the text of comments.
Firebird supports two types of comments: block (or bracketed) and in-line (or simple).
<comment> ::= <block comment> | <single-line comment>

<block comment> ::=
    /* <character>[<character> ...] */

<single-line comment> ::=
    -- <character>[<character> ...]<end line>
Block comments start with the /*
character pair and end with the */
character pair.
Text in block comments may be of any length and can occupy multiple lines.
In-line comments start with a pair of hyphen characters, --
and continue until the first linebreak (end of line).
CREATE PROCEDURE P(APARAM INT)
RETURNS (B INT)
AS
BEGIN
/* This text will be ignored during the execution of the statement
since it is a comment
*/
  B = APARAM + 1; -- In-line comment
SUSPEND;
END
3. Data Types and Subtypes
Data of various types are used to:
- Define columns in a database table in the `CREATE TABLE` statement or change columns using `ALTER TABLE`
- Declare or change a domain using the `CREATE DOMAIN` or `ALTER DOMAIN` statements
- Declare local variables, return values and parameters in PSQL modules and UDFs — user-defined functions
- Provide arguments for the `CAST()` function when explicitly converting data from one type to another
| Name | Size | Precision & Limits | Description |
|---|---|---|---|
| `BIGINT` | 64 bits | From -2^63 to 2^63 - 1 | Signed 64-bit integer. This data type is available in Dialect 3 only |
| `BINARY(n)` | n bytes | from 1 to 32,767 bytes | A fixed-length binary data type; synonym for `CHAR(n) CHARACTER SET OCTETS` |
| `BLOB` | Varying | The size of a `BLOB` segment is limited to 64 KB | A data type of variable size for storing large amounts of data, such as images, text, digital sounds. The blob subtype defines its content. Depending on the page size, … |
| `BOOLEAN` | 1 byte | false, true, unknown | Boolean data type |
| `CHAR(n)`, `CHARACTER(n)` | n characters. Size in bytes depends on the encoding, the number of bytes in a character | from 1 to 32,767 bytes | A fixed-length character data type. Values shorter than the declared length are padded with spaces (0x20) — or NUL (0x00) for character set OCTETS — up to the declared length. If the number of characters is not specified, 1 is used by default. |
| `DATE` | 4 bytes | From 0001-01-01 AD to 9999-12-31 AD | Date only, no time element |
| `DECFLOAT(dec_prec)` | 64 or 128 bits | dec_prec = 16 or 34, defines the number of decimal digits | Decimal floating-point type, IEEE-754 decimal64 or decimal128. If the precision is not specified, 34 is used by default. |
| `DECIMAL(precision, scale)` | Varying (16, 32, 64 or 128 bits) | precision = from 1 to 38, defines the minimum number of digits to store; scale = from 0 to 38, defines the number of digits after the decimal point | A number with a decimal point that has scale digits after the point. scale must be less than or equal to precision. Example: `DECIMAL(10, 3)` holds numbers in the format ppppppp.sss |
| `DOUBLE PRECISION` | 64 bits | 2.225 * 10^-308 to 1.797 * 10^308 | Double-precision, IEEE-754 binary64, ~15 digits, reliable size depends on the platform |
| `FLOAT` | 32 bits | 1.175 * 10^-38 to 3.402 * 10^38 | Single-precision, IEEE-754 binary32, ~7 digits |
| `FLOAT(bin_prec)` | 32 or 64 bits | bin_prec = from 1 to 53, binary precision | Binary precision 1 - 24: synonym for `FLOAT`; 25 - 53: synonym for `DOUBLE PRECISION` |
| `INTEGER`, `INT` | 32 bits | -2,147,483,648 up to 2,147,483,647 | Signed 32-bit integer |
| `INT128` | 128 bits | From -2^127 to 2^127 - 1 | Signed 128-bit integer |
| `NUMERIC(precision, scale)` | Varying (16, 32, 64 or 128 bits) | precision = from 1 to 38, defines the minimum number of digits to store; scale = from 0 to 38, defines the number of digits after the decimal point | A number with a decimal point that has scale digits after the point. scale must be less than or equal to precision. Example: `NUMERIC(10, 3)` holds numbers in the format ppppppp.sss |
| `REAL` | 32 bits | | Synonym for `FLOAT` |
| `SMALLINT` | 16 bits | -32,768 to 32,767 | Signed short (word) |
| `TIME` | 4 bytes | 0:00 to 23:59:59.9999 | Time of day. It cannot be used to store an interval of time. |
| `TIME WITH TIME ZONE` | 6 bytes | 0:00 to 23:59:59.9999 | Time of day with either a time zone offset or named zone. It cannot be used to store an interval of time. |
| `TIMESTAMP` | 8 bytes | From start of day 0001-01-01 AD to end of day 9999-12-31 AD | Date and time of day |
| `TIMESTAMP WITH TIME ZONE` | 10 bytes | From start of day 0001-01-01 AD to end of day 9999-12-31 AD | Date and time of day with either a time zone offset or named zone. |
| `VARBINARY(n)` | n bytes | from 1 to 32,765 bytes | A variable-length binary data type; synonym for `VARCHAR(n) CHARACTER SET OCTETS` |
| `VARCHAR(n)` | n characters. Size in bytes depends on the encoding, the number of bytes in a character | from 1 to 32,765 bytes | Variable length string type. The total size of characters in bytes cannot be larger than (32KB-3), taking into account their encoding. The two leading bytes store the declared length. There is no default size: the n argument is mandatory. Leading and trailing spaces are stored, and they are not trimmed, except for those trailing characters that are past the declared length. |
3.1. Integer Data Types
The `SMALLINT`, `INTEGER`, `BIGINT` and `INT128` data types are used for integers of various precision in Dialect 3.
Firebird does not support an unsigned integer data type.
3.1.1. SMALLINT
The 16-bit `SMALLINT` data type is for compact data storage of integer data for which only a narrow range of possible values is required.

Numbers of the `SMALLINT` type are within the range from -2^15 to 2^15 - 1, that is, from -32,768 to 32,767.
`SMALLINT` Examples

CREATE DOMAIN DFLAG AS SMALLINT DEFAULT 0 NOT NULL
  CHECK (VALUE=-1 OR VALUE=0 OR VALUE=1);

CREATE DOMAIN RGB_VALUE AS SMALLINT;
3.1.2. INTEGER
The `INTEGER` — or `INT` — data type is a 32-bit integer.

Numbers of the `INTEGER` type are within the range from -2^31 to 2^31 - 1, that is, from -2,147,483,648 to 2,147,483,647.
`INTEGER` Example

CREATE TABLE CUSTOMER (
CUST_NO INTEGER NOT NULL,
CUSTOMER VARCHAR(25) NOT NULL,
CONTACT_FIRST VARCHAR(15),
CONTACT_LAST VARCHAR(20),
...
PRIMARY KEY (CUST_NO) )
3.1.3. BIGINT
`BIGINT` is a 64-bit integer data type, available only in Dialect 3.

Numbers of the `BIGINT` type are within the range from -2^63 to 2^63 - 1, or from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.
3.1.4. INT128
`INT128` is a 128-bit integer data type. This type is not defined in the SQL standard.

Numbers of the `INT128` type are within the range from -2^127 to 2^127 - 1.
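A minimal sketch of declaring 64- and 128-bit integer columns (the table name is hypothetical):

```sql
CREATE TABLE LEDGER (
  ID      BIGINT NOT NULL PRIMARY KEY, -- range -2^63 .. 2^63 - 1
  BALANCE INT128 NOT NULL              -- range -2^127 .. 2^127 - 1
);
```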
3.1.5. Hexadecimal Format for Integer Numbers
Constants of integer types can be specified in a hexadecimal format by means of 1 to 8 digits for `INTEGER`, 9 to 16 hexadecimal digits for `BIGINT`, and 10 to 32 hexadecimal digits for `INT128`.

Hex representation for writing to `SMALLINT` is not explicitly supported, but Firebird will transparently convert a hex number to `SMALLINT` if necessary, provided it falls within the ranges of negative and positive `SMALLINT`.
The usage and numerical value ranges of hexadecimal notation are described in more detail in the discussion of number constants in the chapter entitled Common Language Elements.
CREATE TABLE WHOLELOTTARECORDS (
  ID BIGINT NOT NULL PRIMARY KEY,
  DESCRIPTION VARCHAR(32)
);

-- assuming a single-column table: CREATE TABLE MYBIGINTS (VAL BIGINT);
INSERT INTO MYBIGINTS VALUES (-236453287458723);
INSERT INTO MYBIGINTS VALUES (328832607832);
INSERT INTO MYBIGINTS VALUES (22);
INSERT INTO MYBIGINTS VALUES (-56786237632476);
INSERT INTO MYBIGINTS VALUES (0X6F55A09D42);       -- 478177959234
INSERT INTO MYBIGINTS VALUES (0X7FFFFFFFFFFFFFFF); -- 9223372036854775807
INSERT INTO MYBIGINTS VALUES (0XFFFFFFFFFFFFFFFF); -- -1
INSERT INTO MYBIGINTS VALUES (0X80000000);         -- -2147483648, an INTEGER
INSERT INTO MYBIGINTS VALUES (0X080000000);        -- 2147483648, a BIGINT
INSERT INTO MYBIGINTS VALUES (0XFFFFFFFF);         -- -1, an INTEGER
INSERT INTO MYBIGINTS VALUES (0X0FFFFFFFF);        -- 4294967295, a BIGINT
The hexadecimal `INTEGER`s in the above example are automatically cast to `BIGINT` before being inserted into the table. However, this happens after the numerical value is determined, so `0x80000000` (8 digits) and `0x080000000` (9 digits) will be stored as different `BIGINT` values.
3.2. Floating-Point Data Types
Firebird supports two kinds of floating-point data types: approximate or binary floating-point data types (`FLOAT` and `DOUBLE PRECISION`), and decimal floating-point types (`DECFLOAT`).
3.2.1. Approximate Floating-Point Data Types
Approximate floating-point values are stored in an IEEE 754 binary format that comprises sign, exponent and mantissa. Precision is dynamic, corresponding to the physical storage format of the value, which is exactly 4 bytes for the `FLOAT` type and 8 bytes for `DOUBLE PRECISION`.
Considering the peculiarities of storing floating-point numbers in a database, these data types are not recommended for storing monetary data. For the same reasons, columns with floating-point data are not recommended for use as keys or to have uniqueness constraints applied to them.
For testing data in columns with floating-point data types, expressions should check using a range, for instance with `BETWEEN`, rather than searching for exact matches.
When using these data types in expressions, extreme care is advised regarding the rounding of evaluation results.
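Following the advice above, a range test rather than an equality test (the table and column names are hypothetical):

```sql
-- comparing a DOUBLE PRECISION column against a tolerance band instead of '='
SELECT *
FROM SAMPLES
WHERE READING BETWEEN 3.1415 - 1E-9 AND 3.1415 + 1E-9;
```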
FLOAT
FLOAT [(bin_prec)]
Parameter | Description |
---|---|
bin_prec |
Precision in binary digits; default is 24. 1 - 24: 32-bit single precision; 25 - 53: 64-bit double precision |
The `FLOAT` data type defaults to a 32-bit single precision floating-point type with an approximate precision of 7 decimal digits (24 binary digits).
To ensure the safety of storage, rely on 6 decimal digits of precision.
The syntax `FLOAT(bin_prec)` behaves as follows:

- 1 <= bin_prec <= 24: 32-bit single precision (synonym for `FLOAT`)
- 25 <= bin_prec <= 53: 64-bit double precision (synonym for `DOUBLE PRECISION`)
`FLOAT` without an explicit precision behaves as the SQL standard type `REAL`.
- Firebird 3.0 and earlier supported `FLOAT(dec_prec)` where dec_prec was the approximate precision in decimal digits, with 0 <= dec_prec <= 7 mapped to 32-bit single precision and dec_prec > 7 mapped to 64-bit double precision. This syntax was never documented.
- For bin_prec in `FLOAT(bin_prec)`, the values 1 <= bin_prec <= 24 are all treated as bin_prec = 24, and values 25 <= bin_prec <= 53 are all handled as bin_prec = 53.
- Most Firebird tools will report `FLOAT(1)` — `FLOAT(24)` as `FLOAT`, and `FLOAT(25)` — `FLOAT(53)` as `DOUBLE PRECISION`.
REAL
REAL
The data type `REAL` is a synonym for `FLOAT`, and is provided for syntax compatibility. When used to define a column or parameter, it is indistinguishable from using `FLOAT` or `FLOAT(1)` — `FLOAT(24)`.

- `REAL` has been available as a synonym for `FLOAT` since Firebird 1.0 and even earlier, but was never documented.
- Most Firebird tools will report `FLOAT` instead of `REAL`.
DOUBLE PRECISION
DOUBLE PRECISION
The `DOUBLE PRECISION` data type is stored with an approximate precision of 15 digits.

- Firebird also has the — previously undocumented — synonyms for `DOUBLE PRECISION`: `LONG FLOAT` and `LONG FLOAT(bin_prec)`, with 1 <= bin_prec <= 53. These non-standard type names are deprecated and may be removed in a future Firebird version.
- Firebird 3.0 and earlier supported `LONG FLOAT(dec_prec)` where dec_prec was the approximate precision in decimal digits, where any value for dec_prec mapped to 64-bit double precision.
3.2.2. Decimal Floating-Point Types
Decimal floating-point values are stored in an IEEE 754 decimal format that comprises sign, exponent and coefficient. Contrary to the approximate floating-point data types, precision is either 16 or 34 decimal digits.
DECFLOAT
DECFLOAT [(dec_prec)]
Parameter | Description |
---|---|
dec_prec |
Precision in decimal digits, either 16 or 34. Default is 34. |
`DECFLOAT` is an SQL standard-compliant numeric type that stores floating-point numbers precisely (decimal floating-point type), unlike `FLOAT` or `DOUBLE PRECISION` that provide a binary approximation of the purported precision. The type is stored and transmitted as IEEE 754 standard types `Decimal64` (`DECFLOAT(16)`) or `Decimal128` (`DECFLOAT(34)`). All intermediate calculations are performed with 34-digit values.
| Type | Maximum precision | Minimum Exponent | Maximum Exponent | Smallest value | Largest value |
|---|---|---|---|---|---|
| `DECFLOAT(16)` | 16 | -383 | +384 | 1E-398 | 9.9..9E+384 |
| `DECFLOAT(34)` | 34 | -6143 | +6144 | 1E-6176 | 9.9..9E+6144 |
Observe that although the smallest exponent for `DECFLOAT(16)` is -383, the smallest value has an exponent of -398, but 15 fewer digits. Similarly for `DECFLOAT(34)`: the smallest exponent is -6143, but the smallest value has an exponent of -6176, with 33 fewer digits. The reason is that precision was “sacrificed” to be able to store a smaller value. This is a result of how the value is stored: as a decimal value of 16 or 34 digits and an exponent. For example, `1.234567890123456e-383` is stored as coefficient `1234567890123456` and exponent `-398`, while `1E-398` is stored as coefficient `1`, exponent `-398`.
Behaviour of `DECFLOAT` Operations
The behaviour of `DECFLOAT` operations in a session, specifically rounding and error behaviour, can be configured using the `SET DECFLOAT` management statement, and the `isc_dpb_decfloat_round` and `isc_dpb_decfloat_traps` DPB items.
Length of `DECFLOAT` Literals
It is possible to express `DECFLOAT(34)` values in approximate numeric literals, but only for values with a mantissa of 20 or more digits, or an absolute exponent larger than 308. Scientific notation literals with fewer digits or a smaller absolute exponent are `DOUBLE PRECISION` literals. Exact numeric literals with 40 or more digits — actually 39 digits, when larger than the maximum `INT128` value — are also handled as `DECFLOAT(34)`. Alternatively, use a string literal and explicitly cast to the desired `DECFLOAT` type.

The length of `DECFLOAT` literals cannot exceed 1024 characters. Scientific notation is required for greater values. For example, `0.0<1020 zeroes>11` cannot be used as a literal, but the equivalent in scientific notation, `1.1E-1022`, is valid. Similarly, `10<1022 zeroes>0` can be presented as `1.0E1024`.

Literals with more than 34 significant digits are rounded using the `DECFLOAT` rounding mode of the session.
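A sketch of the literal rules above (the resulting types follow the rules just described; results are illustrative):

```sql
SELECT 1.1E-1022 FROM RDB$DATABASE;  -- exponent beyond DOUBLE PRECISION range: DECFLOAT(34)
SELECT 3.14E0    FROM RDB$DATABASE;  -- short mantissa, small exponent: DOUBLE PRECISION

-- explicit typing via a string literal and a cast
SELECT CAST('1234.5678' AS DECFLOAT(16)) FROM RDB$DATABASE;
```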
`DECFLOAT` and Functions

Use with Standard Functions
A number of standard scalar functions can be used with expressions and values of the `DECFLOAT` type. The aggregate functions `SUM`, `AVG`, `MAX` and `MIN` work with `DECFLOAT` data, as do all the statistical aggregates (including but not limited to `STDDEV` or `CORR`).
Special Functions for DECFLOAT
Firebird supports four functions, designed to support `DECFLOAT` data specifically:

- `COMPARE_DECFLOAT`: compares two `DECFLOAT` values to be equal, different or unordered
- `NORMALIZE_DECFLOAT`: takes a single `DECFLOAT` argument and returns it in its simplest form
- `QUANTIZE`: takes two `DECFLOAT` arguments and returns the first argument scaled using the second value as a pattern
- `TOTALORDER`: performs an exact comparison on two `DECFLOAT` values
Detailed descriptions are available in the Special Functions for DECFLOAT
section of the Built-in Scalar Functions chapter.
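A hedged sketch of calling these functions; exact result values follow the semantics described in the Built-in Scalar Functions chapter and are not asserted here:

```sql
SELECT COMPARE_DECFLOAT(1.0, 1.00) AS CMP,  -- equal in value, result indicates equality
       NORMALIZE_DECFLOAT(12.00)   AS NRM,  -- simplest form of the value
       QUANTIZE(1234.5678, 0.01)   AS QNT,  -- first argument scaled to the pattern's exponent
       TOTALORDER(1.0, 1.00)       AS TORD  -- exact comparison of representations
FROM RDB$DATABASE;
```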
3.3. Fixed-Point Data Types
Fixed-point data types ensure the predictability of multiplication and division operations, making them the choice for storing monetary values.
Firebird implements two fixed-point data types: `NUMERIC` and `DECIMAL`.
According to the SQL standard, both types limit the stored number to the declared scale (the number of digits after the decimal point).
The standard defines different treatment of the precision for each type: precision for `NUMERIC` columns is exactly “as declared”, while `DECIMAL` columns accept numbers whose precision is at least equal to what was declared.
The behaviour of both `NUMERIC` and `DECIMAL` in Firebird is like the SQL-standard `DECIMAL`: the precision is at least equal to what was declared.
For instance, `NUMERIC(4, 2)` defines a number consisting altogether of four digits, including two digits after the decimal point; that is, it can have up to two digits before the point and no more than two digits after the point. If the number 3.1415 is written to a column with this data type definition, the value of 3.14 will be saved in the `NUMERIC(4, 2)` column.
The form of declaration for fixed-point data, for instance, `NUMERIC(p, s)`, is common to both types. The `s` argument in this template is scale. Understanding the mechanism for storing and retrieving fixed-point data should help to visualise why: for storage, the number is multiplied by 10^s (10 to the power of `s`), converting it to an integer; when read, the integer is converted back by multiplying by 10^-s (or, dividing by 10^s).
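The scaled-integer mechanism can be sketched as follows (the table and column names are hypothetical):

```sql
CREATE TABLE PRICES (AMOUNT NUMERIC(9, 2));  -- precision 9, scale 2: backed by a 32-bit INTEGER

-- 12.34 is stored internally as the integer 1234 (12.34 * 10^2)
-- and converted back (1234 / 10^2) when read
INSERT INTO PRICES (AMOUNT) VALUES (12.34);
```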
The method of storing fixed-point data in the database depends on several factors: declared precision, database dialect, declaration type.
| Precision | Data type | Dialect 1 | Dialect 3 |
|---|---|---|---|
| 1 - 4 | `NUMERIC` | `SMALLINT` | `SMALLINT` |
| 1 - 4 | `DECIMAL` | `INTEGER` | `INTEGER` |
| 5 - 9 | `NUMERIC` / `DECIMAL` | `INTEGER` | `INTEGER` |
| 10 - 18 | `NUMERIC` / `DECIMAL` | `DOUBLE PRECISION` | `BIGINT` |
| 19 - 38 | `NUMERIC` / `DECIMAL` | `DOUBLE PRECISION` | `INT128` |
Numerics with precision less than 19 digits use `SMALLINT`, `INTEGER`, `BIGINT` or `DOUBLE PRECISION` as the base data type, depending on the number of digits and SQL dialect. When precision is between 19 and 38 digits an `INT128` is used as the base data type, and the actual precision is always extended to the full 38 digits.

For complex calculations, those digits are cast internally to `DECFLOAT(34)`. The result of various mathematical operations, such as `LOG()`, `EXP()` and so on, and aggregate functions using a high precision numeric argument, will be `DECFLOAT(34)`.
3.3.1. NUMERIC
NUMERIC [(precision [, scale])]
Parameter | Description |
---|---|
precision |
Precision, between 1 and 38. Defaults to 9. |
scale |
Scale, between 0 and precision. Defaults to 0. |
Further to the explanation above, Firebird will store `NUMERIC` data according to the declared precision and scale. Some more examples are:

NUMERIC(4)    stored as   SMALLINT (exact data)
NUMERIC(4,2)              SMALLINT (data * 10^2)
NUMERIC(10,4) (Dialect 1) DOUBLE PRECISION
              (Dialect 3) BIGINT (data * 10^4)
NUMERIC(38,6)             INT128 (data * 10^6)
Always keep in mind that the storage format depends on the precision.
For instance, you define the column type as NUMERIC(2,2) presuming that its range of values will be -0.99…0.99. However, the actual range of values for the column will be -327.68…327.67, which is due to storing the NUMERIC(2,2) data type in the SMALLINT format. In storage, the NUMERIC(4,2), NUMERIC(3,2) and NUMERIC(2,2) data types are the same.
3.3.2. DECIMAL
{ DECIMAL | DEC } [(precision [, scale])]
Parameter | Description |
---|---|
precision | Precision, between 1 and 38. Defaults to 9. |
scale | Scale, between 0 and precision. Defaults to 0. |
The storage format in the database for DECIMAL
is similar to NUMERIC
, with some differences that are easier to observe with the help of some more examples:
DECIMAL(4)    stored as INTEGER (exact data)
DECIMAL(4,2)  stored as INTEGER (data * 10^2)
DECIMAL(10,4) stored as DOUBLE PRECISION (Dialect 1)
                     or BIGINT (data * 10^4) (Dialect 3)
DECIMAL(38,6) stored as INT128 (data * 10^6)
3.4. Data Types for Dates and Times
The DATE
, TIME
and TIMESTAMP
data types are used to work with data containing dates and times.
Note About Dates
Bear in mind that a time series consisting of dates in past centuries is processed without taking into account the actual historical facts, as though the Gregorian calendar were applicable throughout the entire series.
Time zone support is available using the types TIME WITH TIME ZONE
and TIMESTAMP WITH TIME ZONE
.
In this language reference, we’ll use TIME
and TIMESTAMP
to refer both to the specific types without time zone — TIME [WITHOUT TIME ZONE]
and TIMESTAMP [WITHOUT TIME ZONE]
— and aspects of both the without time zone and with time zone types, which one we mean is usually clear from the context.
The data types TIME WITHOUT TIME ZONE, TIMESTAMP WITHOUT TIME ZONE and DATE are defined to use the session time zone when converting from or to a TIME WITH TIME ZONE or TIMESTAMP WITH TIME ZONE.
Dialect 3 supports all five types, while Dialect 1 has only DATE.
The DATE
type in Dialect 3 is “date-only”, whereas the Dialect 1 DATE
type stores both date and time-of-day, equivalent to TIMESTAMP
in Dialect 3.
Dialect 1 has no “date-only” type.
In Dialect 1, DATE data can alternatively be defined as TIMESTAMP; this is recommended for new definitions in Dialect 1 databases.
If fractions of seconds are stored in date and time data types, Firebird stores them to ten-thousandths of a second (100 microseconds or deci-milliseconds). If a lower granularity is preferred, the fraction can be specified explicitly as thousandths, hundredths or tenths of a second, or second, in Dialect 3 databases of ODS 11 or higher.
3.4.1. DATE
DATE
The DATE
data type in Dialect 3 stores only date without time.
The available range for storing data is from January 01, 1 to December 31, 9999.
In Dialect 1, DATE
is an alias for TIMESTAMP
.
Dialect 1 has no “date-only” type.
In Dialect 1, date literals without a time part, as well as casts of the date mnemonics 'TODAY', 'YESTERDAY' and 'TOMORROW', automatically get a zero time part. If you need to store a Dialect 1 timestamp literal with an explicit zero time-part, the engine will accept a literal like '2016-12-25 00:00:00.0000'; however, '2016-12-25' would have exactly the same effect, with fewer keystrokes.
3.4.2. TIME
TIME [{ WITHOUT | WITH } TIME ZONE]
For a bare TIME
, WITHOUT TIME ZONE
is assumed.
The TIME
data type is available in Dialect 3 only.
It stores the time of day within the range from 00:00:00.0000 to 23:59:59.9999.
If you need to get the time-part from DATE
in Dialect 1, you can use the EXTRACT
function.
EXTRACT()
EXTRACT (HOUR FROM DATE_FIELD)
EXTRACT (MINUTE FROM DATE_FIELD)
EXTRACT (SECOND FROM DATE_FIELD)
See also the EXTRACT()
function in the chapter entitled Built-in Scalar Functions.
TIME [WITHOUT TIME ZONE]
The TIME
(or synonym TIME WITHOUT TIME ZONE
) represents a time without time zone information.
TIME WITH TIME ZONE
The TIME WITH TIME ZONE
represents a time with time zone information (either an offset or a named zone).
Firebird uses the ICU implementation of the IANA Time Zone Database for named zones.
EXTRACT()
EXTRACT (TIMEZONE_HOUR FROM TIME_TZ_FIELD)
EXTRACT (TIMEZONE_MINUTE FROM TIME_TZ_FIELD)
3.4.3. TIMESTAMP
TIMESTAMP [{ WITHOUT | WITH } TIME ZONE]
For a bare TIMESTAMP
, WITHOUT TIME ZONE
is assumed.
The TIMESTAMP
data type is available in Dialect 3 and Dialect 1.
It comprises two 32-bit integers — a date-part and a time-part — to form a structure that stores both date and time-of-day.
In Dialect 1, DATE
is an alias for TIMESTAMP
.
The EXTRACT
function works equally well with TIMESTAMP
as with the Dialect 1 DATE
type.
3.4.4. Session Time Zone
As the name implies, the session time zone can be different for each database attachment.
It can be set explicitly in the DPB with the item isc_dpb_session_time_zone
;
otherwise, by default, it uses the same time zone as the operating system of the Firebird server process.
This default can be overridden in firebird.conf
, setting DefaultTimeZone
.
Drivers may apply different defaults, for example specifying the client time zone as the default session time zone. Check your driver documentation for details.
Subsequently, the time zone can be changed to a given time zone using a SET TIME ZONE
statement or reset to its original value with SET TIME ZONE LOCAL
or ALTER SESSION RESET
.
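For example, the statements below switch the session time zone and then restore the original value; the zone names are illustrative:

```sql
-- Switch the session to a named region from RDB$TIME_ZONES
SET TIME ZONE 'America/Sao_Paulo';

-- Or to a fixed displacement from GMT
SET TIME ZONE '-03:00';

-- Restore the session time zone to its original value
SET TIME ZONE LOCAL;
```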
3.4.5. Time Zone Format
A time zone is specified as a string, either a time zone region (for example, 'America/Sao_Paulo'
) or a displacement from GMT in hours:minutes (for example, '-03:00'
).
Supported time zone region names can be found in the system table RDB$TIME_ZONES
.
A time/timestamp with time zone is considered equal to another time/timestamp with time zone if their conversions to UTC are equivalent.
For example, time '10:00 -02:00'
and time '09:00 -03:00'
are equivalent, since both are the same as time '12:00 GMT'
.
The same equivalence applies in UNIQUE constraints and for sorting.
3.4.6. Operations Using Date and Time Values
The method of storing date and time values makes it possible to involve them as operands in some arithmetic operations. In storage, a date value or date-part of a timestamp is represented as the number of days elapsed since “date zero” — November 17, 1858 — whilst a time value or the time-part of a timestamp is represented as the number of deci-milliseconds (100 microseconds) since midnight.
An example is to subtract an earlier date, time or timestamp from a later one, resulting in an interval of time, in days and fractions of days.
Operand 1 | Operation | Operand 2 | Result |
---|---|---|---|
DATE | + | TIME | TIMESTAMP |
DATE | + | TIME WITH TIME ZONE | TIMESTAMP WITH TIME ZONE |
DATE | + | Numeric value n | DATE increased by n whole days; broken values are rounded (not floored) to the nearest integer |
TIME | + | DATE | TIMESTAMP |
TIME WITH TIME ZONE | + | DATE | TIMESTAMP WITH TIME ZONE |
TIME | + | Numeric value n | TIME increased by n seconds; broken values are rounded (not floored) to the nearest deci-millisecond |
TIME WITH TIME ZONE | + | Numeric value n | TIME WITH TIME ZONE increased by n seconds; broken values are rounded (not floored) to the nearest deci-millisecond |
TIMESTAMP | + | Numeric value n | TIMESTAMP, where the date will advance by the number of days and part-day represented by n — so “+ 2.75” will push the date forward by 2 days and 18 hours |
TIMESTAMP WITH TIME ZONE | + | Numeric value n | TIMESTAMP WITH TIME ZONE, where the date will advance by the number of days and part-day represented by n |
DATE | - | DATE | Number of days elapsed, within the range DECIMAL(9, 0) |
DATE | - | Numeric value n | DATE reduced by n whole days; broken values are rounded (not floored) to the nearest integer |
TIME | - | TIME | Number of seconds elapsed, within the range DECIMAL(9, 4) |
TIME | - | TIME WITH TIME ZONE | The without time zone value is converted to WITH TIME ZONE in the current session time zone. Number of seconds elapsed between the UTC values, within the range DECIMAL(9, 4) |
TIME WITH TIME ZONE | - | TIME WITH TIME ZONE | Number of seconds elapsed between the UTC values, within the range DECIMAL(9, 4) |
TIME | - | Numeric value n | TIME reduced by n seconds; broken values are rounded (not floored) to the nearest deci-millisecond |
TIMESTAMP | - | TIMESTAMP | Number of days and part-day, within the range DECIMAL(18, 9) |
TIMESTAMP | - | TIMESTAMP WITH TIME ZONE | The without time zone value is converted to WITH TIME ZONE in the current session time zone. Number of days and part-day between the UTC values, within the range DECIMAL(18, 9) |
TIMESTAMP WITH TIME ZONE | - | TIMESTAMP WITH TIME ZONE | Number of days and part-day between the UTC values, within the range DECIMAL(18, 9) |
TIMESTAMP | - | Numeric value n | TIMESTAMP reduced by n days and part-day; broken values are rounded (not floored) to the nearest deci-millisecond |
In Dialect 1, the DATE type behaves like the Dialect 3 TIMESTAMP in these operations.
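A few of these operations sketched as a Dialect 3 query (the column aliases are illustrative):

```sql
SELECT
  DATE '2024-03-01' - DATE '2024-02-01' AS days_between,  -- 29 (2024 is a leap year)
  DATE '2024-02-28' + 2                 AS rolled_date,   -- 2024-03-01
  TIME '12:00:00' - TIME '11:59:30'     AS secs_between,  -- 30.0000
  TIMESTAMP '2024-01-02 18:00' - TIMESTAMP '2024-01-01 12:00'
                                        AS days_part_day  -- 1.250000000
FROM RDB$DATABASE;
```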
3.4.7. Supplemental Time Zone Features
Firebird provides a number of features to discover time zone information.
Virtual table RDB$TIME_ZONES
A virtual table listing time zones supported in the engine.
See also RDB$TIME_ZONES
in System Tables.
Package RDB$TIME_ZONE_UTIL
A package of time zone utility functions and procedures.
See also RDB$TIME_ZONE_UTIL
in System Packages.
Updating the Time Zone Database
Time zone rules change often; when that happens, it is desirable to update the time zone database as soon as possible.
Firebird stores WITH TIME ZONE
values translated to UTC time.
Suppose a value is created with one time zone database, and a later update of that database changes the information in the range of our stored value.
When that value is read, it will be returned as different to the value that was stored initially.
Firebird uses the IANA time zone database through the ICU library. The ICU library included in the Firebird kit (Windows), or installed in a POSIX operating system, can sometimes have an outdated time zone database.
An updated database can be found on this page on the FirebirdSQL GitHub.
Filename le.zip
stands for little-endian and is the necessary file for most computer architectures (Intel/AMD compatible x86 or x64), while be.zip
stands for big-endian architectures and is necessary mostly for RISC computer architectures.
The content of the zip file must be extracted in the /tzdata
sub-directory of the Firebird installation, overwriting existing *.res
files belonging to the database.
./tzdata
is the default directory where Firebird looks for the time zone database.
It can be overridden with the ICU_TIMEZONE_FILES_DIR
environment variable.
3.5. Character Data Types
For working with character data, Firebird has the fixed-length CHAR
and the variable-length VARCHAR
data types.
The maximum size of text data stored in these data types is 32,767 bytes for CHAR
and 32,765 bytes for VARCHAR
.
The maximum number of characters that will fit within these limits depends on the character set used for the data under consideration.
The collation does not affect this maximum, although it may affect the maximum size of any index that involves the column.
If no character set is explicitly specified when defining a character object, the default character set of the database — at time of defining the object — will be used.
If the database does not have a default character set defined, the object gets the character set NONE
.
3.5.1. Unicode
Most current development tools support Unicode, implemented in Firebird with the character sets UTF8
and UNICODE_FSS
.
UTF8
comes with collations for many languages.
UNICODE_FSS
is more limited and was previously used mainly by Firebird internally for storing metadata.
Keep in mind that one UTF8
character occupies up to 4 bytes, thus limiting the size of CHAR
fields to 8,191 characters (32,767/4).
The actual “bytes per character” value depends on the range the character belongs to.
Non-accented Latin letters occupy 1 byte, Cyrillic letters from the WIN1251
encoding occupy 2 bytes in UTF8
, characters from other encodings may occupy up to 4 bytes.
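The difference between character count and byte count can be observed with the CHAR_LENGTH and OCTET_LENGTH functions (this sketch assumes a UTF8 client connection):

```sql
SELECT
  CHAR_LENGTH('naïve')  AS chars, -- 5 characters
  OCTET_LENGTH('naïve') AS bytes  -- 6 bytes in UTF8: 'ï' occupies two
FROM RDB$DATABASE;
```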
The UTF8
character set implemented in Firebird supports the latest version of the Unicode standard, which makes it the recommended choice for international databases.
3.5.2. Client Character Set
While working with strings, it is essential to keep the character set of the client connection in mind.
If there is a mismatch between the character sets of the stored data and that of the client connection, the output results for string columns are automatically re-encoded, both when data are sent from the client to the server and when they are sent back from the server to the client.
For example, if the database was created in the WIN1251
encoding but KOI8R
or UTF8
is specified in the client’s connection parameters, the mismatch will be transparent.
3.5.3. Special Character Sets
NONE
The character set NONE
is a special character set in Firebird.
It can be characterized such that each byte is a part of a string, but the string is stored in the system without any clues about what constitutes any character: character encoding, collation, case, etc. are simply unknown.
It is the responsibility of the client application to deal with the data and provide the means to interpret the string of bytes in some way that is meaningful to the application and the human user.
OCTETS
Data in OCTETS
encoding are treated as bytes that may not be interpreted as characters.
OCTETS
provides a way to store binary data.
The database engine has no concept of what it is meant to do with a string of bytes in OCTETS
, other than store and retrieve it.
Again, the client side is responsible for validating the data, presenting them in formats that are meaningful to the application and its users and handling any exceptions arising from decoding and encoding them.
CHAR
and VARCHAR
with character set OCTETS
have synonyms BINARY
and VARBINARY
.
3.5.4. Collation
Each character set has a default collation (COLLATE
) that specifies the collation order (or, collation sequence, or collating sequence).
Usually, it provides nothing more than ordering based on the numeric code of the characters and a basic mapping of upper- and lower-case characters.
If some behaviour is needed for strings that is not provided by the default collation and a suitable alternative collation is supported for that character set, a COLLATE collation
clause can be specified in the column definition.
A COLLATE collation
clause can be applied in other contexts besides the column definition.
For comparison operations, it can be added in the WHERE
clause of a SELECT
statement.
If output needs to be sorted in a special alphabetic sequence, or case-insensitively, and the appropriate collation exists, then a COLLATE
clause can be included with the ORDER BY
clause when rows are being sorted on a character field and with the GROUP BY
clause in case of grouping operations.
Case-Insensitive Searching
For a case-insensitive search, the UPPER
function could be used to convert both the search argument and the searched strings to upper-case before attempting a match:
...
where upper(name) = upper(:flt_name)
For strings in a character set that has a case-insensitive collation available, you can apply the collation, to compare the search argument and the searched strings directly.
For example, using the WIN1251
character set, the collation PXW_CYRL
is case-insensitive for this purpose:
...
WHERE FIRST_NAME COLLATE PXW_CYRL >= :FLT_NAME
...
ORDER BY NAME COLLATE PXW_CYRL
UTF8
Collations
The following table shows the possible collations for the UTF8
character set.
Collation | Characteristics |
---|---|
UCS_BASIC | Collation works according to the position of the character in the table (binary). |
UNICODE | Collation works according to the UCA algorithm (Unicode Collation Algorithm) (alphabetical). |
UTF8 | The default, binary collation, identical to UCS_BASIC. |
UNICODE_CI | Case-insensitive collation, works without taking character case into account. |
UNICODE_CI_AI | Case-insensitive, accent-insensitive collation, works alphabetically without taking character case or accents into account. |
An example of collation for the UTF8 character set without taking into account the case or accentuation of characters (similar to COLLATE PXW_CYRL
in the earlier example).
...
ORDER BY NAME COLLATE UNICODE_CI_AI
3.5.5. Character Indexes
The maximum length for an index key equals one quarter of the page size, i.e. from 1,024 — for page size 4,096 — to 8,192 bytes — for page size 32,768. The maximum length of an indexed string is 9 bytes less than that quarter-page limit, so for a character set using N bytes per character:

max_char_length = FLOOR((page_size / 4 - 9) / N)

The table below shows the maximum length of an indexed string (in characters), according to page size and character set, calculated using this formula.
Page Size | 1 byte/char | 2 bytes/char | 3 bytes/char | 4 bytes/char | 6 bytes/char |
---|---|---|---|---|---|
4,096 | 1,015 | 507 | 338 | 253 | 169 |
8,192 | 2,039 | 1,019 | 679 | 509 | 339 |
16,384 | 4,087 | 2,043 | 1,362 | 1,021 | 681 |
32,768 | 8,183 | 4,091 | 2,727 | 2,045 | 1,363 |
With case-insensitive collations (“_CI”), one character in the index key will occupy not 4, but 6 (six) bytes, so the maximum key length for a page of — for example — 4,096 bytes, will be 169 characters.
3.5.6. Character Types in Detail
BINARY
BINARY [(length)]
Parameter | Description |
---|---|
length | Length in bytes, between 1 and 32,767; defaults to 1 |
BINARY
is a fixed-length binary data type, and is an SQL standard-compliant alias for CHAR(length) CHARACTER SET OCTETS
.
Values shorter than the declared length are padded with NUL (0x00) up to the declared length.
Some tools may report the type as CHAR CHARACTER SET OCTETS instead of BINARY.
CHAR
{CHAR | CHARACTER} [(length)] [CHARACTER SET <set>] [COLLATE <name>]
Parameter | Description |
---|---|
length | Length in characters, defaults to 1 |
set | Character set name |
name | Collation name |
CHAR
is a fixed-length character data type.
Values shorter than the declared length are padded with spaces up to the declared length.
The pad character does not have to be a space (0x20): it depends on the character set.
For example, the pad character for the OCTETS
character set is NUL (0x00).
Fixed-length character data can be used to store codes whose length is standard and has a definite “width”. An example of such a code is an EAN13 barcode — 13 characters, all filled.
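A sketch of such a fixed-width column (the table and column names are illustrative):

```sql
CREATE TABLE PRODUCT (
  -- An EAN13 barcode is always exactly 13 characters, so CHAR fits naturally
  EAN13 CHAR(13) CHARACTER SET ASCII
);
```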
CHAR CHARACTER SET OCTETS
has the alias BINARY
.
Formally, the COLLATE
clause is not part of the data type declaration, and its position depends on the syntax of the statement.
VARBINARY
{VARBINARY | BINARY VARYING} (length)
Parameter | Description |
---|---|
length | Length in bytes, between 1 and 32,765 |
VARBINARY
is a variable-length binary type, and is an SQL standard-compliant alias for VARCHAR(length) CHARACTER SET OCTETS
.
Some tools may report the type as VARCHAR CHARACTER SET OCTETS instead of VARBINARY.
VARCHAR
{VARCHAR | {CHAR | CHARACTER} VARYING} (length) [CHARACTER SET <set>] [COLLATE <name>]
Parameter | Description |
---|---|
length | Length in characters. A valid length is from 1 to the maximum number of characters that can be accommodated within 32,765 bytes. |
set | Character set name |
name | Collation name |
VARCHAR
is a variable-length character data type, up to a maximum of 32,765 bytes.
The stored structure is equal to the actual size of the data plus 2 bytes to record the length of the data.
All characters that are sent from the client application to the database are considered meaningful, including leading and trailing spaces.
VARCHAR CHARACTER SET OCTETS
has the alias VARBINARY
.
Formally, the COLLATE
clause is not part of the data type declaration, and its position depends on the syntax of the statement.
NCHAR
{NCHAR | NATIONAL {CHAR | CHARACTER}} [(length)]
NCHAR
is a fixed-length character data type with the ISO8859_1
character set.
In all other respects it is the same as CHAR
.
If no length is specified, it is taken to be 1.
A similar data type is available for the variable-length string type: NATIONAL {CHAR | CHARACTER} VARYING
.
3.6. Boolean Data Type
3.6.1. BOOLEAN
BOOLEAN
The SQL-compliant BOOLEAN
data type (8 bits) comprises the distinct truth values TRUE
and FALSE
.
Unless prohibited by a NOT NULL
constraint, the BOOLEAN
data type also supports the truth value UNKNOWN
as the null value.
The specification does not make a distinction between the NULL
value of this data type, and the truth value UNKNOWN
that is the result of an SQL predicate, search condition, or Boolean value expression: they may be used interchangeably to mean the same thing.
As with many programming languages, the SQL BOOLEAN
values can be tested with implicit truth values.
For example, field1 OR field2
and NOT field1
are valid expressions.
The IS
Operator
Predicates can use the operator Boolean IS [NOT]
for matching.
For example, field1 IS FALSE
, or field1 IS NOT TRUE
.
BOOLEAN
Examples
-
Inserting and selecting
CREATE TABLE TBOOL (ID INT, BVAL BOOLEAN);
COMMIT;

INSERT INTO TBOOL VALUES (1, TRUE);
INSERT INTO TBOOL VALUES (2, 2 = 4);
INSERT INTO TBOOL VALUES (3, NULL = 1);
COMMIT;

SELECT * FROM TBOOL;

          ID BVAL
============ =======
           1 <true>
           2 <false>
           3 <null>
-
Test for TRUE value

SELECT * FROM TBOOL WHERE BVAL;

          ID BVAL
============ =======
           1 <true>
-
Test for FALSE value

SELECT * FROM TBOOL WHERE BVAL IS FALSE;

          ID BVAL
============ =======
           2 <false>
-
Test for UNKNOWN value

SELECT * FROM TBOOL WHERE BVAL IS UNKNOWN;

          ID BVAL
============ =======
           3 <null>
-
Boolean values in the SELECT list

SELECT ID, BVAL, BVAL AND ID < 2
FROM TBOOL;

          ID BVAL
============ ======= =======
           1 <true>  <true>
           2 <false> <false>
           3 <null>  <false>
-
PSQL declaration with start value
DECLARE VARIABLE VAR1 BOOLEAN = TRUE;
-
Valid syntax, but as with a comparison with NULL, will never return any record

SELECT * FROM TBOOL WHERE BVAL = UNKNOWN;
SELECT * FROM TBOOL WHERE BVAL <> UNKNOWN;
Use of Boolean Against Other Data Types
Although BOOLEAN
is not inherently convertible to any other data type, the strings 'true'
and 'false'
(case-insensitive) will be implicitly cast to BOOLEAN
in value expressions.
For example:
if (true > 'false') then ...
The value 'false'
is converted to BOOLEAN
.
Any attempt to use the Boolean operators AND
, NOT
, OR
and IS
will fail.
NOT 'False'
, for example, is invalid.
A BOOLEAN
can be explicitly converted to and from string with CAST
.
UNKNOWN
is not available for any form of casting.
For ordering and comparison, the value TRUE
is greater than the value FALSE
.
3.7. Binary Data Types
The types BINARY and VARBINARY are also binary data types; they are described in the section Character Data Types.
BLOB
s (Binary Large Objects) are complex structures used to store text and binary data of an undefined length, often very large.
BLOB [SUB_TYPE <subtype>] [SEGMENT SIZE <segment size>] [CHARACTER SET <character set>] [COLLATE <collation name>]
If the SUB_TYPE
and CHARACTER SET
clauses are absent, then subtype BINARY
(or 0
) is used.
If the SUB_TYPE
clause is absent and the CHARACTER SET
clause is present, then subtype TEXT
(or 1
) is used.
BLOB (<segment size>) BLOB (<segment size>, <subtype>) BLOB (, <subtype>)
Formally, the COLLATE
clause is not part of the data type declaration, and its position depends on the syntax of the statement.
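The subtype defaulting rules above can be illustrated as follows (the table and column names are illustrative):

```sql
CREATE TABLE DOCS (
  RAW_DATA  BLOB,                    -- SUB_TYPE BINARY (0), no character set
  BODY      BLOB SUB_TYPE TEXT,     -- SUB_TYPE TEXT (1), database default character set
  BODY_UTF8 BLOB CHARACTER SET UTF8 -- CHARACTER SET alone implies SUB_TYPE TEXT (1)
);
```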
3.7.1. BLOB
Subtypes
The optional SUB_TYPE
parameter specifies the nature of data written to the column.
Firebird provides two pre-defined subtypes for storing user data:
- Subtype 0:
BINARY
-
If a subtype is not specified, the specification is assumed to be for untyped data and the default
SUB_TYPE BINARY
(orSUB_TYPE 0
) is applied. This is the subtype to specify when the data are any form of binary file or stream: images, audio, word-processor files, PDFs and so on. - Subtype 1:
TEXT
-
Subtype 1 has an alias,
TEXT
, which can be used in declarations and definitions. For instance,BLOB SUB_TYPE TEXT
(orBLOB SUB_TYPE 1
). It is a specialized subtype used to store plain text data that is too large to fit into a string type. ACHARACTER SET
may be specified, if the field is to store text with a different encoding to that specified for the database. ACOLLATE
clause is also supported.Specifying
CHARACTER SET
without specifying aSUB_TYPE
impliesSUB_TYPE TEXT
.
It is also possible to add custom data subtypes, for which the range of enumeration from -1 to -32,768 is reserved.
Custom subtypes enumerated with positive numbers are not allowed, as the Firebird engine uses the numbers from 2-upward for some internal subtypes in metadata.
Custom subtype aliases can be inserted into the RDB$TYPES
table by users with the system privilege CREATE_USER_TYPES
.
3.7.2. BLOB
Specifics
The maximum size of a BLOB
field depends on the page size of the database, whether the blob value is created as a stream blob or a segmented blob, and if segmented, the actual segment sizes used when populating the blob.
For most built-in functions, the maximum size of a BLOB
field is 4 GB, or data beyond the 4 GB limit is not addressable.
For a page size of 4 KB (4096 bytes) the maximum size is slightly less than 4 GB.
Text BLOBs of any length and any character set — including multi-byte — can be operands for practically any statement or internal functions. The following operators are fully supported:
= | (assignment) |
=, <>, <, <=, >, >= | (comparison) |
|| | (concatenation) |
BETWEEN, IS [NOT] DISTINCT FROM, IN, ANY or SOME, ALL | |
As an efficient alternative to concatenation, you can also use BLOB_APPEND()
or the functions and procedures of system package RDB$BLOB_UTIL
.
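For instance, BLOB_APPEND can build up a text blob in one pass instead of creating an intermediate blob per concatenation (a sketch — see the BLOB_APPEND entry in Built-in Scalar Functions for the exact argument handling):

```sql
-- Appends the arguments into a single text blob;
-- ASCII_CHAR(10) supplies a line feed between the two lines
SELECT BLOB_APPEND('Line 1', ASCII_CHAR(10), 'Line 2') AS TWO_LINES
FROM RDB$DATABASE;
```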
Partial support:
-
An error occurs with these if the search argument is larger than or equal to 32 KB: LIKE, STARTING [WITH], CONTAINING
Aggregation clauses work not on the contents of the field itself, but on the BLOB ID. Aside from that, there are some quirks:
- SELECT DISTINCT returns several NULL values by mistake if they are present
- ORDER BY —
- GROUP BY concatenates the same strings if they are adjacent to each other, but does not do it if they are remote from each other
BLOB
Storage-
By default, a regular record is created for each BLOB, and it is stored on a data page that is allocated for it. If the entire
BLOB
fits onto this page, it is called a level 0 BLOB. The number of this special record is stored in the table record and occupies 8 bytes. -
If a
BLOB
does not fit onto one data page, its contents are put onto separate pages allocated exclusively to it (blob pages), while the numbers of these pages are stored into theBLOB
record. This is a level 1 BLOB. -
If the array of page numbers containing the
BLOB
data does not fit onto a data page, the array is put on separate blob pages, while the numbers of these pages are put into theBLOB
record. This is a level 2 BLOB. -
Levels higher than 2 are not supported.
3.8. Array Types
Firebird does not offer much in the way of language or tools for working with the contents of arrays, and there are no plans to improve this. This limits the usefulness and accessibility of array types. Therefore, the general advice is: do not use arrays.
The support of arrays in the Firebird DBMS is a departure from the traditional relational model. Supporting arrays in the DBMS could make it easier to solve some data-processing tasks involving large sets of similar data.
Arrays in Firebird are stored in BLOB
of a specialized type.
Arrays can be one-dimensional and multi-dimensional and of any data type except BLOB
and ARRAY
.
CREATE TABLE SAMPLE_ARR (
ID INTEGER NOT NULL PRIMARY KEY,
ARR_INT INTEGER [4]
);
This example will create a table with a field of the array type consisting of four integers. The subscripts of this array are from 1 to 4.
3.8.1. Specifying Explicit Boundaries for Dimensions
By default, dimensions are 1-based — subscripts are numbered from 1. To specify explicit upper and lower bounds of the subscript values, use the following syntax:
'[' <lower>:<upper> ']'
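For example, a one-dimensional array of 24 elements with subscripts running from 0 to 23 (the table and column names are illustrative):

```sql
CREATE TABLE SAMPLE_ARR_BOUNDS (
  ID    INTEGER NOT NULL PRIMARY KEY,
  TEMPS DOUBLE PRECISION [0:23] -- explicit lower bound 0, upper bound 23
);
```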
3.8.2. Adding More Dimensions
A new dimension is added using a comma in the syntax. In this example we create a table with a two-dimensional array, with the lower bound of subscripts in both dimensions starting from zero:
CREATE TABLE SAMPLE_ARR2 (
ID INTEGER NOT NULL PRIMARY KEY,
ARR_INT INTEGER [0:3, 0:3]
);
The database employee.fdb
, found in the ../examples/empbuild
directory of any Firebird distribution package, contains a sample stored procedure showing some simple work with arrays:
3.8.3. PSQL Source for SHOW_LANGS
, a procedure involving an array
CREATE OR ALTER PROCEDURE SHOW_LANGS (
CODE VARCHAR(5),
GRADE SMALLINT,
CTY VARCHAR(15))
RETURNS (LANGUAGES VARCHAR(15))
AS
DECLARE VARIABLE I INTEGER;
BEGIN
I = 1;
WHILE (I <= 5) DO
BEGIN
SELECT LANGUAGE_REQ[:I]
FROM JOB
WHERE (JOB_CODE = :CODE)
AND (JOB_GRADE = :GRADE)
AND (JOB_COUNTRY = :CTY)
AND (LANGUAGE_REQ IS NOT NULL)
INTO :LANGUAGES;
IF (LANGUAGES = '') THEN
/* PRINTS 'NULL' INSTEAD OF BLANKS */
LANGUAGES = 'NULL';
I = I +1;
SUSPEND;
END
END
If the features described are enough for your tasks, you might consider using arrays in your projects. Currently, no improvements are planned to enhance support for arrays in Firebird.
3.9. Special Data Types
“Special” data types …
3.9.1. SQL_NULL
Data Type
The SQL_NULL
type holds no data, but only a state: NULL
or NOT NULL
.
It is not available as a data type for declaring table fields, PSQL variables or parameter descriptions.
This data type exists to support the use of untyped parameters in expressions involving the IS NULL
predicate.
An evaluation problem occurs when optional filters are used to write queries of the following type:
WHERE col1 = :param1 OR :param1 IS NULL
After processing, at the API level, the query will look like this:
WHERE col1 = ? OR ? IS NULL
This is a case where the developer writes an SQL query and considers :param1
as though it were a variable that they can refer to twice.
However, at the API level, the query contains two separate and independent parameters.
The server cannot determine the type of the second parameter since it comes in association with IS NULL
.
The SQL_NULL
data type solves this problem.
Whenever the engine encounters a “? IS NULL
” predicate in a query, it assigns the SQL_NULL
type to the parameter, indicating that the parameter is only about “nullness”, so its data type and value need not be addressed.
The following example demonstrates its use in practice.
It assumes two named parameters — say, :size
and :colour
— which might, for example, get values from on-screen text fields or drop-down lists.
Each named parameter corresponds with two positional parameters in the query.
SELECT
SH.SIZE, SH.COLOUR, SH.PRICE
FROM SHIRTS SH
WHERE (SH.SIZE = ? OR ? IS NULL)
AND (SH.COLOUR = ? OR ? IS NULL)
Explaining what happens here assumes the reader is familiar with the Firebird API and the passing of parameters in XSQLVAR structures — what happens under the surface will not be of interest to those who are not writing drivers or applications that communicate using the “naked” API.
The application passes the parameterized query to the server in the usual positional ?
-form.
Pairs of “identical” parameters cannot be merged into one, so for the two optional filters in the example, four positional parameters are needed: one for each ?
in our example.
After the call to isc_dsql_describe_bind()
, the SQLTYPE of the second and fourth parameters will be set to SQL_NULL
.
Firebird has no knowledge of their special relation with the first and third parameters: that responsibility lies entirely on the application side.
Once the values for size and colour have been set (or left unset) by the user, and the query is about to be executed, each pair of XSQLVAR
s must be filled as follows:
- User has supplied a value
-
First parameter (value compare): set
*sqldata
to the supplied value and*sqlind
to0
(forNOT NULL
)Second parameter (
NULL
test): setsqldata
tonull
(null pointer, not SQLNULL
) and*sqlind
to0
(forNOT NULL
) - User has left the field blank
-
Both parameters: set
sqldata
tonull
(null pointer, not SQLNULL
) and*sqlind
to-1
(indicatingNULL
)
In other words: The value compare parameter is always set as usual.
The SQL_NULL
parameter is set the same, except that sqldata
remains null
at all times.
3.10. Conversion of Data Types
When composing an expression or specifying an operation, the aim should be to use compatible data types for the operands. When a need arises to use a mixture of data types, it should prompt you to look for a way to convert incompatible operands before subjecting them to the operation. The ability to convert data may well be an issue if you are working with Dialect 1 data.
3.10.1. Explicit Data Type Conversion
The CAST
function enables explicit conversion between many pairs of data types.
CAST (<expression> AS <target_type>)

<target_type> ::= <domain_or_non_array_type> | <array_datatype>

<domain_or_non_array_type> ::= !! See Scalar Data Types Syntax !!
<array_datatype> ::= !! See Array Data Types Syntax !!
See also CAST()
in Chapter Built-in Scalar Functions.
Casting to a Domain
When you cast to a domain, any constraints declared for it are taken into account, i.e. NOT NULL
or CHECK
constraints.
If the value does not pass the check, the cast will fail.
If TYPE OF
is additionally specified — casting to its base type — any domain constraints are ignored during the cast.
If TYPE OF
is used with a character type (CHAR
/VARCHAR
), the character set and collation are retained.
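A sketch contrasting the two forms (the domain name is illustrative):

```sql
CREATE DOMAIN D_POSITIVE AS INTEGER CHECK (VALUE > 0);
COMMIT;

-- Fails: -5 violates the domain's CHECK constraint
SELECT CAST(-5 AS D_POSITIVE) FROM RDB$DATABASE;

-- Succeeds: TYPE OF casts to the base type (INTEGER), ignoring the constraint
SELECT CAST(-5 AS TYPE OF D_POSITIVE) FROM RDB$DATABASE;
```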
Casting to TYPE OF COLUMN
When operands are cast to the type of a column, the specified column may be from a table or a view.
Only the type of the column itself is used. For character types, the cast includes the character set, but not the collation. The constraints and default values of the source column are not applied.
CREATE TABLE TTT (
S VARCHAR (40)
CHARACTER SET UTF8 COLLATE UNICODE_CI_AI
);
COMMIT;
SELECT
CAST ('I have many friends' AS TYPE OF COLUMN TTT.S)
FROM RDB$DATABASE;
Conversions Possible for the CAST
Function
From Data Type | To Data Type |
---|---|
Numeric types | Numeric types, [VAR]CHAR, BLOB |
[VAR]CHAR | [VAR]CHAR, BLOB, Numeric types, DATE, TIME, TIMESTAMP, BOOLEAN |
BLOB | [VAR]CHAR, BLOB, Numeric types, DATE, TIME, TIMESTAMP, BOOLEAN |
DATE, TIME | [VAR]CHAR, BLOB, TIMESTAMP |
TIMESTAMP | [VAR]CHAR, BLOB, DATE, TIME |
BOOLEAN | [VAR]CHAR, BLOB |
To convert string data types to the BOOLEAN
type, the value must be (case-insensitive) 'true'
or 'false'
, or NULL
.
Keep in mind that partial information loss is possible.
For instance, when you cast the TIMESTAMP data type to the DATE data type, the time-part is lost.
Datetime Formats
To cast string data types to the DATE
, TIME
or TIMESTAMP
data types, you need the string argument to be one of the predefined datetime mnemonics (see Table 16) or a representation of the date in one of the allowed datetime formats (see Datetime Format Syntax).
Literal | Description |
---|---|
'NOW' | Current date and time |
'TODAY' | Current date |
'TOMORROW' | Current date + 1 (day) |
'YESTERDAY' | Current date - 1 (day) |
Casting the date mnemonics 'TODAY'
, 'TOMORROW'
or 'YESTERDAY'
to a TIMESTAMP WITH TIME ZONE
will produce a value at 00:00:00 UTC rebased to the session time zone.
For example, cast('TODAY' as timestamp with time zone)
on 2021-05-02 20:00 - 2021-05-03 19:59 New York (or 2021-05-03 00:00 - 2021-05-03 23:59 UTC) with session time zone America/New_York, will produce a value TIMESTAMP '2021-05-02 20:00:00.0000 America/New_York'
, while cast('TODAY' as date)
or CURRENT_DATE
will produce either DATE '2021-05-02'
or DATE '2021-05-03'
depending on the actual date.
select
cast('04.12.2014' as date) as d1, -- DD.MM.YYYY
cast('04 12 2014' as date) as d2, -- MM DD YYYY
cast('4-12-2014' as date) as d3, -- MM-DD-YYYY
cast('04/12/2014' as date) as d4, -- MM/DD/YYYY
cast('04.12.14' as date) as d5, -- DD.MM.YY
-- DD.MM with current year
cast('04.12' as date) as d6,
-- MM/DD with current year
cast('04/12' as date) as d7,
cast('2014/12/04' as date) as d8, -- YYYY/MM/DD
cast('2014 12 04' as date) as d9, -- YYYY MM DD
cast('2014.12.04' as date) as d10, -- YYYY.MM.DD
cast('2014-12-04' as date) as d11, -- YYYY-MM-DD
cast('4 Jan 2014' as date) as d12, -- DD MM YYYY
cast('2014 Jan 4' as date) as dt13, -- YYYY MM DD
cast('Jan 4 2014' as date) as dt14, -- MM DD YYYY
cast('11:37' as time) as t1, -- HH:mm
cast('11:37:12' as time) as t2, -- HH:mm:ss
cast('11:31:12.1234' as time) as t3, -- HH:mm:ss.nnnn
-- DD.MM.YYYY HH:mm
cast('04.12.2014 11:37' as timestamp) as dt1,
-- MM/DD/YYYY HH:mm:ss
cast('04/12/2014 11:37:12' as timestamp) as dt2,
-- DD.MM.YYYY HH:mm:ss.nnnn
cast('04.12.2014 11:31:12.1234' as timestamp) as dt3,
cast('now' as timestamp) as m1,
cast('today' as date) as m2,
cast('yesterday' as date) as m3,
cast('tomorrow' as date) as m4
from rdb$database
Shorthand Casts for Datetime Data Types
Firebird allows the use of a shorthand “C-style” type syntax for casts from string to the types DATE
, TIME
and TIMESTAMP
.
The SQL standard calls these “datetime literals”.
<data_type> 'date_format_string'
See also Datetime Literals.
These literal expressions are evaluated directly during parsing, as though the statement were already prepared for execution.
As this produced unexpected or confusing results when using datetime mnemonics like 'NOW', especially in PSQL code, datetime mnemonics are no longer allowed in datetime literals. To use datetime mnemonics, use the full CAST syntax. |
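A minimal sketch of the two forms:

```sql
-- Raises an error since Firebird 4.0: mnemonics are not valid in datetime literals
-- SELECT DATE 'TODAY' FROM RDB$DATABASE;

-- Works: the mnemonic is evaluated through the full CAST syntax
SELECT CAST('TODAY' AS DATE) FROM RDB$DATABASE;
```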
3.10.2. Implicit Data Type Conversion
Implicit data conversion is not possible in Dialect 3 — the CAST
function is almost always required to avoid data type clashes.
In Dialect 1, in many expressions, one type is implicitly cast to another without the need to use the CAST function. For instance, the following statement in Dialect 1 is valid:
UPDATE ATABLE
SET ADATE = '25.12.2016' + 1
The string literal will be cast to the DATE
type implicitly.
In Dialect 3, this statement will raise error 335544569, “Dynamic SQL Error: expression evaluation not supported, Strings cannot be added or subtracted in dialect 3” — a cast will be needed:
UPDATE ATABLE
SET ADATE = CAST ('25.12.2016' AS DATE) + 1
Or, with a datetime literal:
UPDATE ATABLE
SET ADATE = DATE '25.12.2016' + 1
In Dialect 1, mixing integer data and numeric strings is usually possible because the parser will try to cast the string implicitly. For example,
2 + '1'
will be executed correctly.
In Dialect 3, an expression like this will raise an error, so you will need to write it as a CAST
expression:
2 + CAST('1' AS SMALLINT)
The exception to the rule is during string concatenation.
Implicit Conversion During String Concatenation
When multiple data elements are being concatenated, all non-string data will undergo implicit conversion to string, if possible.
SELECT 30||' days hath September, April, June and November' CONCAT$
FROM RDB$DATABASE;
CONCAT$
------------------------------------------------
30 days hath September, April, June and November
3.11. Custom Data Types — Domains
In Firebird, the concept of a “user-defined data type” is implemented in the form of the domain. Creating a domain does not truly create a new data type, of course. A domain provides the means to encapsulate an existing data type with a set of attributes and make this “capsule” available for reuse across the whole database. If several tables need columns defined with identical or nearly identical attributes, a domain makes sense.
Domain usage is not limited to column definitions for tables and views. Domains can be used to declare input and output parameters and variables in PSQL code.
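A minimal sketch of a domain reused for a parameter and a local variable in PSQL (all names here are illustrative):

```sql
CREATE DOMAIN D_AMOUNT AS NUMERIC(18,2) NOT NULL;

SET TERM ^;

CREATE PROCEDURE ADD_VAT (NET_AMOUNT D_AMOUNT)
RETURNS (GROSS_AMOUNT D_AMOUNT)
AS
  DECLARE VAT D_AMOUNT;  -- local variable typed by the same domain
BEGIN
  VAT = NET_AMOUNT * 0.21;
  GROSS_AMOUNT = NET_AMOUNT + VAT;
  SUSPEND;
END^

SET TERM ;^
```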
3.11.1. Domain Attributes
A domain definition has required and optional attributes. The data type is a required attribute. Optional attributes include:
-
a default value
-
to allow or forbid
NULL
-
CHECK
constraints -
character set (for character data types and text BLOB fields)
-
collation (for character data types)
CREATE DOMAIN BOOL3 AS SMALLINT
CHECK (VALUE IS NULL OR VALUE IN (0, 1));
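A fuller illustrative definition combining the optional attributes (the domain name and values are hypothetical):

```sql
CREATE DOMAIN D_LOGIN AS VARCHAR(32)
  CHARACTER SET UTF8      -- character set for the text type
  DEFAULT 'guest'         -- default value
  NOT NULL                -- forbid NULL
  CHECK (VALUE <> '')     -- CHECK constraint
  COLLATE UNICODE_CI;     -- collation
```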
See also Explicit Data Type Conversion for the description of differences in the data conversion mechanism when domains are specified for the TYPE OF and TYPE OF COLUMN modifiers.
3.11.2. Domain Override
While defining a column using a domain, it is possible to override some attributes inherited from the domain. Table 17 summarises the rules for domain override.
Attribute | Override? | Comments |
---|---|---|
Data type | No | |
Default value | Yes | |
Text character set | Yes | It can also be used to restore the default database values for the column |
Text collation | Yes | |
CHECK constraints | Yes | To add new conditions to the check, you can use the corresponding CHECK clauses in the CREATE TABLE and ALTER TABLE statements |
NOT NULL | No | Often it is better to leave a domain nullable in its definition and decide whether to make it NOT NULL when defining the columns |
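A sketch of overriding domain attributes at the column level (names are illustrative):

```sql
CREATE DOMAIN D_CITY AS VARCHAR(40) DEFAULT 'Unknown';
COMMIT;

CREATE TABLE ADDRESSES (
  -- Overrides the domain default and adds a NOT NULL constraint
  CITY D_CITY DEFAULT 'Amsterdam' NOT NULL
);
```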
3.11.3. Creating and Administering Domains
A domain is created with the DDL statement CREATE DOMAIN
.
CREATE DOMAIN name [AS] <type>
  [DEFAULT {<literal> | NULL | <context_var>}]
  [NOT NULL] [CHECK (<condition>)]
  [COLLATE <collation>]
See also CREATE DOMAIN in the Data Definition (DDL) Statements chapter.
Altering a Domain
To change the attributes of a domain, use the DDL statement ALTER DOMAIN
.
With this statement you can:
-
rename the domain
-
change the data type
-
drop the current default value
-
set a new default value
-
drop the
NOT NULL
constraint -
set the
NOT NULL
constraint -
drop an existing
CHECK
constraint -
add a new
CHECK
constraint
ALTER DOMAIN name
  [{TO new_name}]
  [{SET DEFAULT {<literal> | NULL | <context_var>} | DROP DEFAULT}]
  [{SET | DROP} NOT NULL]
  [{ADD [CONSTRAINT] CHECK (<dom_condition>) | DROP CONSTRAINT}]
  [{TYPE <datatype>}]
ALTER DOMAIN STORE_GRP SET DEFAULT -1;
When changing a domain, its dependencies must be taken into account: whether there are table columns, any variables, input and/or output parameters with the type of this domain declared in the PSQL code. If you change domains in haste, without carefully checking them, your code may stop working!
When you convert data types in a domain, you must not perform any conversions that may result in data loss.
Also, for example, if you convert VARCHAR to INTEGER, check carefully that all data using this domain can be successfully converted. |
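An illustrative sequence of alterations (the domain names are hypothetical):

```sql
-- Rename the domain
ALTER DOMAIN D_CODE TO D_ITEM_CODE;

-- Tighten its rules: forbid NULL and add a CHECK constraint
ALTER DOMAIN D_ITEM_CODE
  SET NOT NULL
  ADD CONSTRAINT CHECK (VALUE STARTING WITH 'IT-');
```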
See also ALTER DOMAIN in the Data Definition (DDL) Statements chapter.
Deleting (Dropping) a Domain
The DDL statement DROP DOMAIN
deletes a domain from the database, provided it is not in use by any other database objects.
DROP DOMAIN name
DROP DOMAIN Test_Domain
See also DROP DOMAIN in the Data Definition (DDL) Statements chapter.
3.12. Data Type Declaration Syntax
This section documents the syntax of declaring data types.
Data type declaration commonly occurs in DDL statements, but also in CAST
and EXECUTE BLOCK
.
The syntax documented below is referenced from other parts of this language reference.
3.12.1. Scalar Data Types Syntax
The scalar data types are simple data types that hold a single value. For reasons of organisation, the syntax of BLOB types is defined separately in BLOB Data Types Syntax.
<domain_or_non_array_type> ::=
    <scalar_datatype>
  | <blob_datatype>
  | [TYPE OF] domain
  | TYPE OF COLUMN rel.col

<scalar_datatype> ::=
    SMALLINT | INT[EGER] | BIGINT | INT128
  | REAL | FLOAT [(bin_prec)] | DOUBLE PRECISION
  | DECFLOAT [(dec_prec)]
  | BOOLEAN
  | DATE
  | TIME [{WITHOUT | WITH} TIME ZONE]
  | TIMESTAMP [{WITHOUT | WITH} TIME ZONE]
  | {DECIMAL | DEC | NUMERIC} [(precision [, scale])]
  | {VARCHAR | {CHAR | CHARACTER} VARYING} (length) [CHARACTER SET charset]
  | {CHAR | CHARACTER} [(length)] [CHARACTER SET charset]
  | {NCHAR | NATIONAL {CHARACTER | CHAR}} VARYING (length)
  | {NCHAR | NATIONAL {CHARACTER | CHAR}} [(length)]
  | BINARY [(length)]
  | {VARBINARY | BINARY VARYING} (length)
Argument | Description |
---|---|
domain |
Domain (only non-array domains) |
rel |
Name of a table or view |
col |
Name of a column in a table or view (only columns of a non-array type) |
bin_prec |
Binary precision, default is 24. 1 - 24: 32-bit single precision; 25 - 53: 64-bit double precision |
dec_prec |
Decimal precision of DECFLOAT, either 16 or 34. Default is 34 |
precision |
Numeric precision in decimal digits. From 1 to 38 |
scale |
Scale, or number of decimals. From 0 to 38. It must be less than or equal to precision |
length |
The maximum length of a string, in characters, or — for BINARY, VARBINARY and CHARACTER SET OCTETS — in bytes |
charset |
Character set |
domain_or_non_array_type |
Non-array types that can be used in PSQL code and casts |
Use of Domains in Declarations
A domain name can be specified as the type of a PSQL parameter or local variable. The parameter or variable will inherit all domain attributes. If a default value is specified for the parameter or variable, it overrides the default value specified in the domain definition.
If the TYPE OF
clause is added before the domain name, only the data type of the domain is used: any of the other attributes of the domain — NOT NULL
constraint, CHECK
constraints, default value — are neither checked nor used.
However, if the domain is of a text type, its character set and collation are always used.
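A sketch, assuming a domain D_AMOUNT exists (all names are illustrative):

```sql
SET TERM ^;

CREATE PROCEDURE RAW_SUM (A TYPE OF D_AMOUNT, B TYPE OF D_AMOUNT)
RETURNS (TOTAL TYPE OF D_AMOUNT)
AS
BEGIN
  -- Only the base data type of D_AMOUNT is used here; its NOT NULL,
  -- CHECK and default attributes are neither checked nor applied
  TOTAL = A + B;
  SUSPEND;
END^

SET TERM ;^
```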
Use of Column Type in Declarations
Input and output parameters or local variables can also be declared using the data type of columns in existing tables and views.
The TYPE OF COLUMN
clause is used for that, specifying relationname.columnname as its argument.
When TYPE OF COLUMN
is used, the parameter or variable inherits only the data type and — for string types — the character set and collation.
The constraints and default value of the column are ignored.
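A sketch, assuming a table PEOPLE with columns ID and NAME (names are illustrative):

```sql
SET TERM ^;

CREATE PROCEDURE GET_NAME (P_ID TYPE OF COLUMN PEOPLE.ID)
RETURNS (P_NAME TYPE OF COLUMN PEOPLE.NAME)
AS
BEGIN
  -- The parameters inherit the columns' data types (and, for strings,
  -- character set and collation), but not constraints or defaults
  SELECT NAME FROM PEOPLE WHERE ID = :P_ID INTO :P_NAME;
  SUSPEND;
END^

SET TERM ;^
```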
3.12.2. BLOB Data Types Syntax
The BLOB data types hold binary, character or custom format data of unspecified size. For more information, see Binary Data Types.
<blob_datatype> ::=
    BLOB [SUB_TYPE {subtype_num | subtype_name}]
      [SEGMENT SIZE seglen] [CHARACTER SET charset]
  | BLOB [(seglen [, subtype_num])]
  | BLOB [(, subtype_num)]
Argument | Description |
---|---|
charset |
Character set (ignored for subtypes other than TEXT) |
subtype_num |
BLOB subtype number |
subtype_name |
BLOB subtype mnemonic name |
seglen |
Segment size, cannot be greater than 65,535, defaults to 80 when not specified. See also Segment Size |
If the SUB_TYPE
and CHARACTER SET
clauses are absent, then subtype BINARY
(or 0
) is used.
If the SUB_TYPE
clause is absent and the CHARACTER SET
clause is present, then subtype TEXT
(or 1
) is used.
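For example (table and column names are illustrative):

```sql
CREATE TABLE DOCUMENTS (
  DOC_ID   INTEGER,
  RAW_DATA BLOB SUB_TYPE BINARY,                  -- subtype 0
  BODY     BLOB SUB_TYPE TEXT CHARACTER SET UTF8, -- subtype 1
  LEGACY   BLOB (80, 1)                           -- segment size 80, subtype 1 (TEXT)
);
```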
3.12.3. Array Data Types Syntax
The array data types hold multiple scalar values in a single or multi-dimensional array. For more information, see Array Types.
<array_datatype> ::=
    {SMALLINT | INT[EGER] | BIGINT | INT128} <array_dim>
  | {REAL | FLOAT [(bin_prec)] | DOUBLE PRECISION} <array_dim>
  | DECFLOAT [(dec_prec)] <array_dim>
  | BOOLEAN <array_dim>
  | DATE <array_dim>
  | TIME [{WITHOUT | WITH} TIME ZONE] <array_dim>
  | TIMESTAMP [{WITHOUT | WITH} TIME ZONE] <array_dim>
  | {DECIMAL | DEC | NUMERIC} [(precision [, scale])] <array_dim>
  | {VARCHAR | {CHAR | CHARACTER} VARYING} (length) <array_dim> [CHARACTER SET charset]
  | {CHAR | CHARACTER} [(length)] <array_dim> [CHARACTER SET charset]
  | {NCHAR | NATIONAL {CHARACTER | CHAR}} VARYING (length) <array_dim>
  | {NCHAR | NATIONAL {CHARACTER | CHAR}} [(length)] <array_dim>
  | BINARY [(length)] <array_dim>
  | {VARBINARY | BINARY VARYING} (length) <array_dim>

<array_dim> ::= '[' [m:]n [,[m:]n ...] ']'
Argument | Description |
---|---|
array_dim |
Array dimensions |
bin_prec |
Binary precision, default is 24. 1 - 24: 32-bit single precision; 25 - 53: 64-bit double precision |
dec_prec |
Decimal precision of DECFLOAT, either 16 or 34. Default is 34 |
precision |
Numeric precision in decimal digits. From 1 to 38 |
scale |
Scale, or number of decimals. From 0 to 38. It must be less than or equal to precision |
length |
The maximum length of a string, in characters, or — for BINARY, VARBINARY and CHARACTER SET OCTETS — in bytes |
charset |
Character set |
m, n |
Integer numbers defining the index range of an array dimension |
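For example (names are illustrative):

```sql
CREATE TABLE MEASUREMENTS (
  ID      INTEGER,
  MONTHLY FLOAT [12],         -- one dimension, indexes 1..12 by default
  GRID    INTEGER [0:3, 0:3]  -- two dimensions with explicit lower bounds
);
```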
4. Common Language Elements
This chapter covers the elements that are common throughout the implementation of the SQL language — the expressions that are used to extract and operate on conditions about data and the predicates that test the truth of those assertions.
4.1. Expressions
SQL expressions provide formal methods for evaluating, transforming and comparing values. SQL expressions may include table columns, variables, constants, literals, various statements and predicates and also other expressions. The complete list of possible tokens in expressions follows.
- Column name
-
Identifier of a column from a specified table used in evaluations or as a search condition. A column of the array type cannot be an element in an expression except when used with the
IS [NOT] NULL
predicate. - Array element
-
An expression may contain a reference to an array member i.e.,
<array_name>[s]
, wheres
is the subscript of the member in the array<array_name>
- Arithmetic operators
-
The
+
,-
,*
,/
characters used to calculate values - Concatenation operator
-
The
||
(“double-pipe”) operator used to concatenate strings - Logical operators
-
The reserved words
NOT
,AND
andOR
, used to combine simple search conditions to create complex conditions - Comparison operators
-
The symbols
=
,<>
,!=
,~=
,^=
,<
,<=
,>
,>=
,!<
,~<
,^<
,!>
,~>
and^>
- Comparison predicates
-
LIKE
,STARTING WITH
,CONTAINING
,SIMILAR TO
,BETWEEN
,IS [NOT] NULL
,IS [NOT] {TRUE | FALSE | UNKNOWN}
andIS [NOT] DISTINCT FROM
- Existential predicates
-
Predicates used to check the existence of values in a set. The
IN
predicate can be used both with sets of comma-separated constants and with subqueries that return a single column. TheEXISTS
,SINGULAR
,ALL
,ANY
andSOME
predicates can be used only with sub-queries. - Constant or Literal
-
Numbers, or string literals enclosed in apostrophes or Q-strings, Boolean values
TRUE
,FALSE
andUNKNOWN
,NULL
- Datetime literal
-
An expression, similar to a string literal enclosed in apostrophes, that can be interpreted as a date, time or timestamp value. Datetime literals can be strings of characters and numerals, such as
TIMESTAMP '25.12.2016 15:30:35'
, that can be resolved as datetime value. - Datetime mnemonics
-
A string literal with a description of a desired datetime value that can be cast to a datetime type. For example
'TODAY'
,'NOW'
. - Context variable
-
An internally-defined context variable
- Local variable
-
Declared local variable, input or output parameter of a PSQL module (stored procedure, stored function, trigger, or unnamed PSQL block in DSQL)
- Positional parameter
-
A member of an ordered group of one or more unnamed parameters passed to a stored procedure or prepared query
- Subquery
-
A
SELECT
statement enclosed in parentheses that returns a single (scalar) value or, when used in existential predicates, a set of values - Function identifier
-
The identifier of an internal, packaged, stored or external function in a function expression
- Type cast
-
An expression explicitly converting data of one data type to another using the
CAST
function (CAST (<value> AS <datatype>)
). For datetime literals only, the shorthand syntax<datatype> <value>
is also supported (DATE '2016-12-25'
). - Conditional expression
-
Expressions using CASE and related internal functions
- Parentheses
-
Bracket pairs
(…)
used to group expressions. Operations inside the parentheses are performed before operations outside them. When nested parentheses are used, the most deeply nested expressions are evaluated first and then the evaluations move outward through the levels of nesting. COLLATE
clause-
Clause applied to CHAR and VARCHAR types to specify the character-set-specific collation to use in string comparisons
NEXT VALUE FOR sequence
-
Expression for obtaining the next value of a specified generator (sequence). The internal function
GEN_ID()
does the same. AT
expression-
Expression to change the time zone of a datetime.
4.1.1. Literals (Constants)
A literal — or constant — is a value that is supplied directly in an SQL statement, not derived from an expression, a parameter, a column reference nor a variable. It can be a string or a number.
String Literals
A string literal is a series of characters enclosed between a pair of apostrophes (“single quotes”).
The maximum length of a string literal is 32,765 for CHAR
/VARCHAR
, or 65,533 bytes for BLOB
;
the maximum character count will be determined by the number of bytes used to encode each character.
Formally, the syntax of a normal string literal is:
<char-literal> ::=
  [<introducer> charset-name] <quote> [<char>...] <quote>
  [{ <separator> <quote> [<char>...] <quote> }...]

<separator> ::= { <comment> | <white space> }
<introducer> ::= underscore (U+005F)
<quote> ::= apostrophe (U+0027)
<char> ::= character representation; apostrophe is escaped by doubling
-
In Dialect 3, double quotes are not valid for quoting strings. The SQL standard reserves double quotes for a different purpose: delimiting or quoting identifiers.
-
If a literal apostrophe is required within a string constant, it is “escaped” by prefixing it with another apostrophe. For example,
'Mother O''Reilly''s home-made hooch'
. Or use the alternative quote literal:q'{Mother O'Reilly's home-made hooch}'
-
Care should be taken with the string length if the value is to be written to a
CHAR
orVARCHAR
column. The maximum length for aCHAR
orVARCHAR
literal is 32,765 bytes.
The character set of a string constant is assumed to be the same as the character set of its destined storage.
-- Literal containing single quote
select 'O''Reilly' from RDB$DATABASE;
-- output: O'Reilly
-- whitespace between literal
select 'ab'
'cd'
from RDB$DATABASE;
-- output: abcd
-- comment and whitespace between literal
select 'ab' /* comment */ 'cd'
from RDB$DATABASE;
-- output: abcd
String Literals in Hexadecimal Notation
String literals can also be entered in hexadecimal notation, so-called “binary strings”.
Each pair of hex digits defines one byte in the string.
Strings entered this way will be type BINARY
(a.k.a. CHAR CHARACTER SET OCTETS
) by default, unless the introducer syntax is used to force a string to be interpreted as another character set.
<binary-literal> ::=
  [<introducer> charsetname] X <quote> [<space>...]
  [{ <hexit> [<space>...] <hexit> [<space>...] }...] <quote>
  [{ <separator> <quote> [<space>...]
     [{ <hexit> [<space>...] <hexit> [<space>...] }...] <quote> }...]

<hexit> ::= one of 0..9, A..F, a..f
<space> ::= the space character (U+0020)

!! For further rules, see Character String Literal Syntax !!
select x'4E657276656E' from rdb$database
-- returns 4E657276656E, a 6-byte 'binary' string
select _ascii x'4E657276656E' from rdb$database
-- returns 'Nerven' (same string, now interpreted as ASCII text)
select _iso8859_1 x'53E46765' from rdb$database
-- returns 'Säge' (4 chars, 4 bytes)
select _utf8 x'53C3A46765' from rdb$database
-- returns 'Säge' (4 chars, 5 bytes)
-- Group per byte (whitespace inside literal)
select _win1252 x'42 49 4e 41 52 59'
from RDB$DATABASE;
-- output: BINARY
-- whitespace between literal
select _win1252 x'42494e'
'415259'
from RDB$DATABASE;
-- output: BINARY
The client interface determines how binary strings are displayed to the user.
The isql utility, for example, uses upper case letters A-F, while FlameRobin uses lower case letters.
Other client programs may use other conventions, such as displaying spaces between the byte pairs: '4E 65 72 76 65 6E'
.
The hexadecimal notation allows any byte value (including 00) to be inserted at any position in the string. However, if you want to coerce it to anything other than OCTETS, it is your responsibility to supply the bytes in a sequence that is valid for the target character set.
The usage of the _win1252
introducer in above example is a non-standard extension and equivalent to an explicit cast to a CHAR
of appropriate length with character set WIN1252.
Alternative String Literals
It is possible to use a character, or character pair, other than the doubled (escaped) apostrophe, to embed a quoted string inside another string without the need to escape the quote.
The keyword q
or Q
preceding a quoted string informs the parser that certain left-right pairs or pairs of identical characters within the string are the delimiters of the embedded string literal.
<alternative string literal> ::= { q | Q } <quote> <start char> [<char> ...] <end char> <quote>
-
When
<start char>
is ‘(
’, ‘{
’, ‘[
’ or ‘<
’,<end char>
is paired up with its respective “partner”, viz. ‘)
’, ‘}
’, ‘]
’ and ‘>
’. -
In other cases,
<end char>
is the same as<start char>
. -
Inside the string, i.e.
<char>
items, single quotes can be used without escaping. Each quote will be part of the result string.
select q'{abc{def}ghi}' from rdb$database; -- result: abc{def}ghi
select q'!That's a string!' from rdb$database; -- result: That's a string
Introducer Syntax for String Literals
If necessary, a string literal may be preceded by a character set name, itself prefixed with an underscore “_”. This is known as introducer syntax. Its purpose is to inform the engine about how to interpret and store the incoming string.
Example
INSERT INTO People
VALUES (_ISO8859_1 'Hans-Jörg Schäfer')
Number Literals
A number literal is any valid number in a supported notation:
-
In SQL, for numbers in the standard decimal notation, the decimal point is always represented by period character (‘
.
’, full-stop, dot); thousands are not separated. Inclusion of commas, blanks, etc. will cause errors. -
Exponential notation is supported. For example,
0.0000234
can be expressed as2.34e-5
. However, while the literal0.0000234
is aNUMERIC(18,7)
, the literal2.34e-5
is aDOUBLE PRECISION
. -
Hexadecimal notation — see below.
The format of the literal decides the type (<d>
for a decimal digit, <h>
for a hexadecimal digit):
Format | Type |
---|---|
|
|
|
|
|
For backwards compatibility, some values of 19 digits are mapped to the BIGINT type |
|
|
Hexadecimal Notation for Numbers
Integer values can also be entered in hexadecimal notation.
Numbers with 1-8 hex digits will be interpreted as type INTEGER
;
numbers with 9-16 hex digits as type BIGINT
;
numbers with 17-32 hex digits as type INT128
.
0{x|X}<hexdigits>

<hexdigits> ::= 1-32 of <hexdigit>
<hexdigit> ::= one of 0..9, A..F, a..f
select 0x6FAA0D3 from rdb$database -- returns 117088467
select 0x4F9 from rdb$database -- returns 1273
select 0x6E44F9A8 from rdb$database -- returns 1850014120
select 0x9E44F9A8 from rdb$database -- returns -1639646808 (an INTEGER)
select 0x09E44F9A8 from rdb$database -- returns 2655320488 (a BIGINT)
select 0x28ED678A4C987 from rdb$database -- returns 720001751632263
select 0xFFFFFFFFFFFFFFFF from rdb$database -- returns -1
-
Hex numbers in the range 0 … 7FFF FFFF are positive
INTEGER
s with values between 0 … 2147483647 decimal. To coerce a number toBIGINT
, prepend enough zeroes to bring the total number of hex digits to nine or above. That changes the type but not the value. -
Hex numbers between 8000 0000 … FFFF FFFF require some attention:
-
When written with eight hex digits, as in
0x9E44F9A8
, a value is interpreted as 32-bitINTEGER
. Since the leftmost bit (sign bit) is set, it maps to the negative range -2147483648 … -1 decimal. -
With one or more zeroes prepended, as in
0x09E44F9A8
, a value is interpreted as 64-bitBIGINT
in the range 0000 0000 8000 0000 … 0000 0000 FFFF FFFF. The sign bit is not set now, so they map to the positive range 2147483648 … 4294967295 decimal.
Thus, in this range — and likewise for 16 versus 17 or more digits, where BIGINT becomes INT128 — prepending a mathematically insignificant 0 results in a different value. This is something to be aware of.
-
-
Hex numbers between 0 0000 0001 … 7FFF FFFF FFFF FFFF are all positive
BIGINT
. -
Hex numbers between 8000 0000 0000 0000 … FFFF FFFF FFFF FFFF are all negative
BIGINT
. -
Hex numbers between 0 0000 0000 0000 0001 … 7FFF FFFF FFFF FFFF FFFF FFFF FFFF FFFF are all positive
INT128
-
Hex numbers between 8000 0000 0000 0000 0000 0000 0000 0000 … FFFF FFFF FFFF FFFF FFFF FFFF FFFF FFFF are all negative
INT128
-
A
SMALLINT
cannot be written in hex, strictly speaking, since even0x0
and0x1
are evaluated asINTEGER
. However, if you write a positive integer within the 16-bit range0x0000
(decimal zero) to0x7FFF
(decimal 32767) it will be converted toSMALLINT
transparently.It is possible to write to a negative
SMALLINT
in hex, using a 4-byte hex number within the range0xFFFF8000
(decimal -32768) to0xFFFFFFFF
(decimal -1).
Datetime Literals
Formally, the SQL standard defines datetime literals as a prefix DATE
, TIME
and TIMESTAMP
followed by a string literal with a datetime format.
Historically, Firebird documentation has referred to these datetime literals as “shorthand casts”.
Since Firebird 4.0, the use of datetime mnemonics in datetime literals (e.g. DATE 'TODAY'
) is no longer allowed.
The format of datetime literals and strings in Firebird 4.0 and higher is more strict compared to earlier Firebird versions. |
<datetime_literal> ::= DATE '<date_format>' | TIME { '<time_format>' | '<time_tz_format>' } | TIMESTAMP { '<timestamp_format>' | '<timestamp_tz_format>' }
<date_format> ::=
    [YYYY<p>]MM<p>DD
  | MM<p>DD[<p>{YYYY | YY}]
  | DD<p>MM[<p>{YYYY | YY}]

<time_format> ::= HH[:mm[:SS[<f>NNNN]]]

<timestamp_format> ::= <date_format> [<space> <time_format>]

<time_zone> ::= { + | - }HH:MM | time zone name (e.g. Europe/Berlin)

<time_tz_format> ::= <time_format> [<space>] <time_zone>

<timestamp_tz_format> ::= <timestamp_format> [<space>] <time_zone>

<p> ::= whitespace | . | - | /
<f> ::= : | .
Argument | Description |
---|---|
datetime_literal |
Datetime literal |
date_format |
Format of date |
time_format |
Format of time |
timestamp_format |
Format of timestamp |
time_zone |
Format of time zone |
time_tz_format |
Format of time with time zone |
timestamp_tz_format |
Format of timestamp with time zone |
YYYY |
Four-digit year |
YY |
Two-digit year |
MM |
Month It may contain 1 or 2 digits (1-12 or 01-12). You can also specify the three-letter shorthand name or the full name of a month in English. Case-insensitive |
DD |
Day. It may contain 1 or 2 digits (1-31 or 01-31) |
HH |
Hour. It may contain 1 or 2 digits (0-23 or 00-23) |
mm |
Minutes. It may contain 1 or 2 digits (0-59 or 00-59) |
SS |
Seconds. It may contain 1 or 2 digits (0-59 or 00-59) |
NNNN |
Ten-thousandths of a second. It may contain from 1 to 4 digits (0-9999) |
p |
A date separator, any of the permitted characters.
Leading and trailing spaces are ignored.
The choice of separator in a date decides how the parser reads the day and month: with ‘.’ as separator the order is DD.MM, with the other separators it is MM-DD |
f |
Fractional seconds separator |
Use of the complete specification of the year in the four-digit form — YYYY — is strongly recommended, to avoid confusion in date calculations and aggregations |
-- 1
UPDATE PEOPLE
SET AGECAT = 'SENIOR'
WHERE BIRTHDATE < DATE '1-Jan-1943';
-- 2
INSERT INTO APPOINTMENTS
(EMPLOYEE_ID, CLIENT_ID, APP_DATE, APP_TIME)
VALUES (973, 8804, DATE '1-Jan-2021' + 2, TIME '16:00');
-- 3
NEW.LASTMOD = TIMESTAMP '1-Jan-2021 16:00';
Although the Firebird datetime syntax is flexible, to avoid ambiguity we recommend using the ISO-8601 order (year-month-day), ‘-’ as the date separator, four digits for the year, two digits for the other date components, and ‘.’ as the fractional-seconds separator. This is also the only format defined in the SQL standard. In short, use TIMESTAMP '2021-05-03 04:05:06.7' rather than TIMESTAMP '3.5.21 4:5:6.7' |
4.1.2. SQL Operators
SQL operators comprise operators for comparing, calculating, evaluating and concatenating values.
Operator Precedence
SQL operators are divided into four types. Each operator type has a precedence, a ranking that determines the order in which operators — and the values obtained with their help — are evaluated in an expression. The higher the precedence of the operator type, the earlier it is evaluated. Each operator also has its own precedence within its type, which determines the order in which operators of that type are evaluated in an expression.
Operators with the same precedence are evaluated from left to right. To force a different evaluation order, operations can be grouped by means of parentheses.
Operator Type | Precedence | Explanation |
---|---|---|
Concatenation | 1 | Strings are concatenated before any other operations take place |
Arithmetic | 2 | Arithmetic operations are performed after strings are concatenated, but before comparison and logical operations |
Comparison | 3 | Comparison operations take place after string concatenation and arithmetic operations, but before logical operations |
Logical | 4 | Logical operators are executed after all other types of operators |
Concatenation Operator
The concatenation operator — two pipe characters known as “double pipe” or ‘||
’ — concatenates two character strings to form a single string.
Character strings can be literals or values obtained from columns or other expressions.
SELECT LAST_NAME || ', ' || FIRST_NAME AS FULL_NAME
FROM EMPLOYEE
Arithmetic Operators
Operator | Purpose | Precedence |
---|---|---|
+ | Unary plus | 1 |
- | Unary minus | 1 |
* | Multiplication | 2 |
/ | Division | 2 |
+ | Addition | 3 |
- | Subtraction | 3 |
Where operators have the same precedence, they are evaluated in left-to-right sequence.
UPDATE T
SET A = 4 + 1/(B-C)*D
Comparison Operators
Operator | Purpose | Precedence |
---|---|---|
IS [NOT] | Checks that the expression on the left is (not) NULL, (not) a given truth value, or (not) DISTINCT FROM the expression on the right | 1 |
= | Is equal to, is identical to | 2 |
<>, !=, ~=, ^= | Is not equal to | 2 |
> | Is greater than | 2 |
< | Is less than | 2 |
>= | Is greater than or equal to | 2 |
<= | Is less than or equal to | 2 |
!>, ~>, ^> | Is not greater than | 2 |
!<, ~<, ^< | Is not less than | 2 |
This group also includes comparison predicates BETWEEN
, LIKE
, CONTAINING
, SIMILAR TO
and others.
IF (SALARY > 1400) THEN
...
Logical Operators
Operator | Purpose | Precedence |
---|---|---|
NOT | Negation of a search condition | 1 |
AND | Combines two or more predicates, each of which must be true for the entire predicate to be true | 2 |
OR | Combines two or more predicates, of which at least one predicate must be true for the entire predicate to be true | 3 |
IF (A < B OR (A > C AND A > D) AND NOT (C = D)) THEN ...
NEXT VALUE FOR
BIGINT
— dialect 2 and 3
INTEGER
— dialect 1
NEXT VALUE FOR sequence-name
NEXT VALUE FOR
returns the next value of a sequence.
Sequence is the SQL-standard term for what is historically called a generator in Firebird and its ancestor, InterBase.
The NEXT VALUE FOR
operator is equivalent to the legacy GEN_ID (…, increment)
function with increment the increment stored in the metadata of the sequence.
It is the recommended syntax for retrieving the next sequence value.
Unlike the GEN_ID
function, the NEXT VALUE FOR
expression does not take any parameters and thus provides no way to retrieve the current value of a sequence, nor to step the next value by a different value than the increment configured for the sequence.
GEN_ID (…, <step value>)
is still needed for these tasks.
A step value of 0 returns the current sequence value.
The increment of a sequence can be configured with the INCREMENT
clause of CREATE SEQUENCE
or ALTER SEQUENCE
.
NEW.CUST_ID = NEXT VALUE FOR CUSTSEQ;
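Side by side, assuming a sequence CUSTSEQ with increment 1:

```sql
-- In a trigger or procedure:
NEW.CUST_ID = NEXT VALUE FOR CUSTSEQ;  -- recommended, SQL-standard syntax
NEW.CUST_ID = GEN_ID(CUSTSEQ, 1);      -- legacy equivalent

-- Reading the current value without stepping the sequence:
SELECT GEN_ID(CUSTSEQ, 0) FROM RDB$DATABASE;
```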
AT
Time Zone Expression
<at expr> ::= <expr> AT { TIME ZONE <time zone string> | LOCAL }
The AT
expression expresses a datetime value in a different time zone, while keeping the same UTC instant.
AT
translates a time/timestamp value to its corresponding value in another time zone.
If LOCAL
is used, the value is converted to the session time zone.
When expr is a WITHOUT TIME ZONE
type, expr is first converted to a WITH TIME ZONE
in the session time zone and then transformed to the specified time zone.
select time '12:00 GMT' at time zone '-03:00' from rdb$database;
select current_timestamp at time zone 'America/Sao_Paulo' from rdb$database;
select timestamp '2018-01-01 12:00 GMT' at local from rdb$database;
4.1.3. Conditional Expressions
A conditional expression is one that returns different values according to how a certain condition is met.
It is composed by applying a conditional function construct, of which Firebird supports several.
This section describes only one conditional expression construct: CASE
.
All other conditional expressions apply internal functions derived from CASE
and are described in Conditional Functions.
CASE
The CASE
construct returns a single value from a number of possible values.
Two syntactic variants are supported:
-
The simple
CASE
, comparable to a case construct in Pascal or a switch in C -
The searched
CASE
, which works like a series of “if … else if … else if
” clauses.
Simple CASE
...
CASE <test-expr>
  WHEN <expr> THEN <result>
  [WHEN <expr> THEN <result> ...]
  [ELSE <defaultresult>]
END
...
When this variant is used, test-expr is compared to the first expr, second expr and so on, until a match is found, and the corresponding result is returned.
If no match is found, defaultresult from the optional ELSE
clause is returned.
If there are no matches and no ELSE
clause, NULL
is returned.
The matching works as the “=
” operator.
That is, if test-expr is NULL
, it does not match any expr, not even an expression that resolves to NULL
.
The returned result does not have to be a literal value: it might be a field or variable name, compound expression or NULL
literal.
SELECT
NAME,
AGE,
CASE UPPER(SEX)
WHEN 'M' THEN 'Male'
WHEN 'F' THEN 'Female'
ELSE 'Unknown'
END GENDER,
RELIGION
FROM PEOPLE
A short form of the simple CASE
construct is the DECODE
function.
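As an illustration, the query above can be shortened with DECODE, which pairs the test expression with value/result pairs and an optional default:

```sql
SELECT
  NAME,
  AGE,
  -- DECODE(testexpr, expr1, result1 [, expr2, result2 ...] [, default])
  DECODE(UPPER(SEX),
         'M', 'Male',
         'F', 'Female',
         'Unknown') AS GENDER,
  RELIGION
FROM PEOPLE
```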
Searched CASE
CASE
  WHEN <bool_expr> THEN <result>
  [WHEN <bool_expr> THEN <result> ...]
  [ELSE <defaultresult>]
END
The bool_expr expression is one that gives a ternary logical result: TRUE
, FALSE
or NULL
.
The first expression to return TRUE
determines the result.
If no expressions return TRUE
, defaultresult from the optional ELSE
clause is returned as the result.
If no expressions return TRUE
and there is no ELSE
clause, the result will be NULL
.
As with the simple CASE
construct, the result need not be a literal value: it might be a field or variable name, a compound expression, or be NULL
.
CANVOTE = CASE
WHEN AGE >= 18 THEN 'Yes'
WHEN AGE < 18 THEN 'No'
ELSE 'Unsure'
END
4.1.4. NULL
in Expressions
NULL
is not a value in SQL, but a state indicating that the value of the element is either unknown or does not exist.
It is not a zero, nor a void, nor an “empty string”, and it does not act like any value.
When you use NULL
in numeric, string or date/time expressions, the result will always be NULL
.
When you use NULL
in logical (Boolean) expressions, the result will depend on the type of the operation and on other participating values.
When you compare a value to NULL
, the result will be unknown.
In SQL, the logical result unknown is also represented by NULL
.
Consult the
Firebird Null Guide
for more in-depth coverage of Firebird’s NULL
behaviour.
Expressions Returning NULL
Expressions in this list will always return NULL
:
1 + 2 + 3 + NULL
'Home ' || 'sweet ' || NULL
MyField = NULL
MyField <> NULL
NULL = NULL
not (NULL)
If it seems difficult to understand why, remember that NULL
is a state that stands for “unknown”.
NULL
in Logical Expressions
It has already been shown that NOT (NULL)
results in NULL
.
The interaction is a bit more complicated for the logical AND
and logical OR
operators:
NULL or false → NULL
NULL or true → true
NULL or NULL → NULL
NULL and false → false
NULL and true → NULL
NULL and NULL → NULL
As a basic rule-of-thumb, if substituting TRUE for NULL yields a different outcome than substituting FALSE, then the outcome of the original expression is unknown, or NULL.
(1 = NULL) or (1 <> 1) -- returns NULL
(1 = NULL) or FALSE -- returns NULL
(1 = NULL) or (1 = 1) -- returns TRUE
(1 = NULL) or TRUE -- returns TRUE
(1 = NULL) or (1 = NULL) -- returns NULL
(1 = NULL) or UNKNOWN -- returns NULL
(1 = NULL) and (1 <> 1) -- returns FALSE
(1 = NULL) and FALSE -- returns FALSE
(1 = NULL) and (1 = 1) -- returns NULL
(1 = NULL) and TRUE -- returns NULL
(1 = NULL) and (1 = NULL) -- returns NULL
(1 = NULL) and UNKNOWN -- returns NULL
4.1.5. Subqueries
A subquery is a special form of expression that is a query embedded within another query.
Subqueries are written in the same way as regular SELECT
queries, but they must be enclosed in parentheses.
Subquery expressions can be used in the following ways:
-
To specify an output column in the SELECT list
-
To obtain values or conditions for search predicates (the
WHERE
,HAVING
clauses). -
To produce a set that the enclosing query can select from, as though it were a regular table or view. Subqueries like this appear in the
FROM
clause (derived tables) or in a Common Table Expression (CTE)
Correlated Subqueries
A subquery can be correlated. A query is correlated when the subquery and the main query are interdependent: to process each record in the subquery, a record from the main query must be fetched, i.e. the subquery fully depends on the main query.
SELECT *
FROM Customers C
WHERE EXISTS
(SELECT *
FROM Orders O
WHERE C.cnum = O.cnum
AND O.adate = DATE '10.03.1990');
When subqueries are used to get the values of the output column in the SELECT
list, a subquery must return a scalar result (see below).
Scalar Results
Subqueries used in search predicates, other than existential and quantified predicates, must return a scalar result; that is, not more than one column from not more than one matching row or aggregation. If the query returns more columns or rows, a run-time error will occur (“Multiple rows in a singleton select…”).
Although it is reporting a genuine error, the message can be slightly misleading. A “singleton SELECT” is a query that must not be capable of returning more than one row. However, “singleton” and “scalar” are not synonymous: not all singleton SELECTs are required to be scalar; and single-column selects can return multiple rows for existential and quantified predicates.
-
A subquery as the output column in a
SELECT
list:

SELECT
  e.first_name,
  e.last_name,
  (SELECT sh.new_salary
   FROM salary_history sh
   WHERE sh.emp_no = e.emp_no
   ORDER BY sh.change_date DESC
   ROWS 1) AS last_salary
FROM employee e
-
A subquery in the
WHERE
clause for obtaining the employee’s maximum salary and filtering by it:

SELECT
  e.first_name,
  e.last_name,
  e.salary
FROM employee e
WHERE e.salary = (
  SELECT MAX(ie.salary)
  FROM employee ie
)
4.2. Predicates
A predicate is a simple expression asserting some fact, let’s call it P
.
If P
resolves as TRUE, it succeeds.
If it resolves to FALSE or NULL (UNKNOWN), it fails.
A trap lies here, though: suppose the predicate, P
, returns FALSE.
In this case NOT(P)
will return TRUE.
On the other hand, if P
returns NULL (unknown), then NOT(P)
returns NULL as well.
In SQL, predicates can appear in CHECK
constraints, WHERE
and HAVING
clauses, CASE
expressions, the IIF()
function and in the ON
condition of JOIN
clauses, and anywhere a normal expression can occur.
4.2.1. Conditions
A condition — or Boolean expression — is a statement about the data that, like a predicate, can resolve to TRUE, FALSE or NULL.
Conditions consist of one or more predicates, possibly negated using NOT
and connected by AND
and OR
operators.
Parentheses may be used for grouping predicates and controlling evaluation order.
A predicate may embed other predicates. Evaluation sequence is in the outward direction, i.e. the innermost predicates are evaluated first. Each “level” is evaluated in precedence order until the truth value of the ultimate condition is resolved.
4.2.2. Comparison Predicates
A comparison predicate consists of two expressions connected with a comparison operator. There are six traditional comparison operators:
=, >, <, >=, <=, <>
For the complete list of comparison operators with their variant forms, see Comparison Operators.
If one of the sides (left or right) of a comparison predicate has NULL
in it, the value of the predicate will be UNKNOWN
.
-
Retrieve information about computers with the CPU frequency not less than 500 MHz and the price lower than $800:
SELECT * FROM Pc WHERE speed >= 500 AND price < 800;
-
Retrieve information about all dot matrix printers that cost less than $300:
SELECT * FROM Printer WHERE ptrtype = 'matrix' AND price < 300;
-
The following query will return no data, even if there are printers with no type specified for them, because a predicate that compares
NULL
withNULL
returnsNULL
:SELECT * FROM Printer WHERE ptrtype = NULL AND price < 300;
On the other hand,
ptrtype
can be tested forNULL
and return a result: it is just that it is not a comparison test:SELECT * FROM Printer WHERE ptrtype IS NULL AND price < 300;
— see
IS [NOT] NULL
.
Note about String Comparison
When CHAR and VARCHAR fields are compared for equality, trailing spaces are ignored in all cases.
Other Comparison Predicates
Other comparison predicates are marked by keyword symbols.
BETWEEN
<value> [NOT] BETWEEN <value_1> AND <value_2>
The BETWEEN
predicate tests whether a value falls within a specified range of two values.
(NOT BETWEEN
tests whether the value does not fall within that range.)
The operands for BETWEEN
predicate are two arguments of compatible data types.
The BETWEEN
predicate in Firebird is asymmetrical — if the lower bound is not the first argument, the BETWEEN
predicate will return FALSE.
The search is inclusive (the values represented by both arguments are included in the search).
In other words, the BETWEEN
predicate could be rewritten:
<value> >= <value_1> AND <value> <= <value_2>
When BETWEEN
is used in the search conditions of DML queries, the Firebird optimizer can use an index on the searched column, if it is available.
SELECT *
FROM EMPLOYEE
WHERE HIRE_DATE BETWEEN date '1992-01-01' AND CURRENT_DATE
LIKE
<match_value> [NOT] LIKE <pattern> [ESCAPE <escape character>]

<match_value> ::= character-type expression
<pattern> ::= search pattern
<escape character> ::= escape character
The LIKE
predicate compares the character-type expression with the pattern defined in the second expression.
Case- or accent-sensitivity for the comparison is determined by the collation that is in use.
A collation can be specified for either operand, if required.
Two wildcard symbols are available for use in the search pattern:
-
the percentage symbol (
%
) will match any sequence of zero or more characters in the tested value -
the underscore character (
_
) will match any single character in the tested value
If the tested value matches the pattern, taking into account wildcard symbols, the predicate is TRUE.
ESCAPE
Character OptionIf the search string contains either of the wildcard symbols, the ESCAPE
clause can be used to specify an escape character.
The escape character must precede the ‘%
’ or ‘_
’ symbol in the search string, to indicate that the symbol is to be interpreted as a literal character.
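As a small sketch (the table DISCOUNTS and column AMOUNT_TEXT here are hypothetical), a pattern that treats ‘%’ as a literal character might look like this:

```sql
-- Match values that literally start with '50%', e.g. '50% off'.
-- The '#' before the first '%' makes that '%' a literal character;
-- the trailing '%' is still a wildcard.
SELECT *
FROM DISCOUNTS
WHERE AMOUNT_TEXT LIKE '50#%%' ESCAPE '#';
```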
LIKE
-
Find the numbers of departments whose names start with the word “Software”:
SELECT DEPT_NO FROM DEPT WHERE DEPT_NAME LIKE 'Software%';
In this example, it is possible to use an index on the DEPT_NAME field if it exists.
AboutLIKE
and the OptimizerThe
LIKE
predicate itself does not use an index. However, if the predicate takes the form ofLIKE 'string%'
(i.e. the only wildcard is%
at the end), it will be converted toSTARTING WITH 'string'
, which can use an index. This optimization only works for literal patterns, not for parameters.So, if you need to search for the beginning of a string, it is recommended to use the
STARTING WITH
predicate instead of theLIKE
predicate. -
Search for employees whose names consist of 5 letters, start with the letters “Sm” and end with “th”. The predicate will be true for such names as “Smith” and “Smyth”.
SELECT first_name FROM employee WHERE first_name LIKE 'Sm_th'
-
Search for all clients whose address contains the string “Rostov”:
SELECT * FROM CUSTOMER WHERE ADDRESS LIKE '%Rostov%'
If you need to do a case-insensitive search for something enclosed inside a string (
LIKE '%Abc%'
), use of theCONTAINING
predicate is recommended. -
Search for tables containing the underscore character in their names. The ‘
#
’ character is used as the escape character:SELECT RDB$RELATION_NAME FROM RDB$RELATIONS WHERE RDB$RELATION_NAME LIKE '%#_%' ESCAPE '#'
STARTING WITH
<value> [NOT] STARTING WITH <value>
The STARTING WITH
predicate searches for a string or a string-like type that starts with the characters in its value argument.
The case- and accent-sensitivity of STARTING WITH
depends on the collation of the first value.
When STARTING WITH
is used in the search conditions of DML queries, the Firebird optimizer can use an index on the searched column, if it exists.
Search for employees whose last names start with “Jo”:
SELECT LAST_NAME, FIRST_NAME
FROM EMPLOYEE
WHERE LAST_NAME STARTING WITH 'Jo'
CONTAINING
<value> [NOT] CONTAINING <value>
The CONTAINING
predicate searches for a string or a string-like type looking for the sequence of characters that matches its argument.
It can be used for an alphanumeric (string-like) search on numbers and dates.
A CONTAINING
search is not case-sensitive.
However, if an accent-sensitive collation is in use then the search will be accent-sensitive.
-
Search for projects whose names contain the substring “Map”:
SELECT * FROM PROJECT WHERE PROJ_NAME CONTAINING 'Map';
Two rows with the names “AutoMap” and “MapBrowser port” are returned.
-
Search for changes in salaries with the date containing number 84 (in this case, it means changes that took place in 1984):
SELECT * FROM SALARY_HISTORY WHERE CHANGE_DATE CONTAINING 84;
SIMILAR TO
string-expression [NOT] SIMILAR TO <pattern> [ESCAPE <escape-char>]

<pattern> ::= an SQL regular expression
<escape-char> ::= a single character
SIMILAR TO
matches a string against an SQL regular expression pattern.
Unlike in some other languages, the pattern must match the entire string to succeed — matching a substring is not enough.
If any operand is NULL
, the result is NULL
.
Otherwise, the result is TRUE
or FALSE
.
If a literal pattern is used, and it doesn’t start with a wildcard or other special regex character, SIMILAR TO
can use an index.
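For example, a literal pattern with a plain prefix, like the following against the EMPLOYEE table used elsewhere in this chapter, can benefit from an index on LAST_NAME:

```sql
-- The prefix 'Jo' contains no special regex characters, so the optimizer
-- can use an index on LAST_NAME, much like STARTING WITH 'Jo'.
SELECT LAST_NAME, FIRST_NAME
FROM EMPLOYEE
WHERE LAST_NAME SIMILAR TO 'Jo%';
```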
The following syntax defines the SQL regular expression format. It is a complete and correct top-down definition. It is also highly formal and long, and may be daunting to anyone who hasn’t already some experience with regular expressions (or with highly formal, rather long top-down definitions). Feel free to skip it and read the next section, Building Regular Expressions, which uses a bottom-up approach, aimed at the rest of us.
<regular expression> ::= <regular term> ['|' <regular term> ...]

<regular term> ::= <regular factor> ...

<regular factor> ::= <regular primary> [<quantifier>]

<quantifier> ::= ? | * | + | '{' <m> [,[<n>]] '}'

<m>, <n> ::= unsigned int, with <m> <= <n> if both present

<regular primary> ::=
    <character>
  | <character class>
  | %
  | (<regular expression>)

<character> ::= <escaped character> | <non-escaped character>

<escaped character> ::=
  <escape-char> <special character> | <escape-char> <escape-char>

<special character> ::= any of the characters []()|^-+*%_?{}

<non-escaped character> ::=
  any character that is not a <special character>
  and not equal to <escape-char> (if defined)

<character class> ::=
    '_'
  | '[' <member> ... ']'
  | '[^' <non-member> ... ']'
  | '[' <member> ... '^' <non-member> ... ']'

<member>, <non-member> ::= <character> | <range> | <predefined class>

<range> ::= <character>-<character>

<predefined class> ::= '[:' <predefined class name> ':]'

<predefined class name> ::=
  ALPHA | UPPER | LOWER | DIGIT | ALNUM | SPACE | WHITESPACE
In this section are the elements and rules for building SQL regular expressions.
Within regular expressions, most characters represent themselves. The only exceptions are the special characters below:
[ ] ( ) | ^ - + * % _ ? { }
... and the escape character, if it is defined.
A regular expression that contains no special characters or escape characters matches only strings that are identical to itself (subject to the collation in use).
That is, it functions just like the ‘=
’ operator:
'Apple' similar to 'Apple' -- true
'Apples' similar to 'Apple' -- false
'Apple' similar to 'Apples' -- false
'APPLE' similar to 'Apple' -- depends on collation
The known SQL wildcards ‘_
’ and ‘%
’ match any single character and a string of any length, respectively:
'Birne' similar to 'B_rne' -- true
'Birne' similar to 'B_ne' -- false
'Birne' similar to 'B%ne' -- true
'Birne' similar to 'Bir%ne%' -- true
'Birne' similar to 'Birr%ne' -- false
Notice how ‘%
’ also matches the empty string.
A bunch of characters enclosed in brackets define a character class. A character in the string matches a class in the pattern if the character is a member of the class:
'Citroen' similar to 'Cit[arju]oen' -- true
'Citroen' similar to 'Ci[tr]oen' -- false
'Citroen' similar to 'Ci[tr][tr]oen' -- true
As can be seen from the second line, the class only matches a single character, not a sequence.
Within a class definition, two characters connected by a hyphen define a range. A range comprises the two endpoints and all the characters that lie between them in the active collation. Ranges can be placed anywhere in the class definition without special delimiters to keep them apart from the other elements.
'Datte' similar to 'Dat[q-u]e' -- true
'Datte' similar to 'Dat[abq-uy]e' -- true
'Datte' similar to 'Dat[bcg-km-pwz]e' -- false
The following predefined character classes can also be used in a class definition:
[:ALPHA:]
-
Latin letters a..z and A..Z. With an accent-insensitive collation, this class also matches accented forms of these characters.
[:DIGIT:]
-
Decimal digits 0..9.
[:ALNUM:]
-
Union of
[:ALPHA:]
and[:DIGIT:]
. [:UPPER:]
-
Uppercase Latin letters A..Z. Also matches lowercase with case-insensitive collation and accented forms with accent-insensitive collation.
[:LOWER:]
-
Lowercase Latin letters a..z. Also matches uppercase with case-insensitive collation and accented forms with accent-insensitive collation.
[:SPACE:]
-
Matches the space character (ASCII 32).
[:WHITESPACE:]
-
Matches horizontal tab (ASCII 9), linefeed (ASCII 10), vertical tab (ASCII 11), formfeed (ASCII 12), carriage return (ASCII 13) and space (ASCII 32).
Including a predefined class has the same effect as including all its members. Predefined classes are only allowed within class definitions. If you need to match against a predefined class and nothing more, place an extra pair of brackets around it.
'Erdbeere' similar to 'Erd[[:ALNUM:]]eere' -- true
'Erdbeere' similar to 'Erd[[:DIGIT:]]eere' -- false
'Erdbeere' similar to 'Erd[a[:SPACE:]b]eere' -- true
'Erdbeere' similar to '[[:ALPHA:]]' -- false
'E' similar to '[[:ALPHA:]]' -- true
If a class definition starts with a caret, everything that follows is excluded from the class. All other characters match:
'Framboise' similar to 'Fra[^ck-p]boise' -- false
'Framboise' similar to 'Fr[^a][^a]boise' -- false
'Framboise' similar to 'Fra[^[:DIGIT:]]boise' -- true
If the caret is not placed at the start of the sequence, the class contains everything before the caret, except for the elements that also occur after the caret:
'Grapefruit' similar to 'Grap[a-m^f-i]fruit' -- true
'Grapefruit' similar to 'Grap[abc^xyz]fruit' -- false
'Grapefruit' similar to 'Grap[abc^de]fruit' -- false
'Grapefruit' similar to 'Grap[abe^de]fruit' -- false
'3' similar to '[[:DIGIT:]^4-8]' -- true
'6' similar to '[[:DIGIT:]^4-8]' -- false
Lastly, the already mentioned wildcard ‘_
’ is a character class of its own, matching any single character.
A question mark (‘?
’) immediately following a character or class indicates that the preceding item may occur 0 or 1 times to match:
'Hallon' similar to 'Hal?on' -- false
'Hallon' similar to 'Hal?lon' -- true
'Hallon' similar to 'Halll?on' -- true
'Hallon' similar to 'Hallll?on' -- false
'Hallon' similar to 'Halx?lon' -- true
'Hallon' similar to 'H[a-c]?llon[x-z]?' -- true
An asterisk (‘*
’) immediately following a character or class indicates that the preceding item may occur 0 or more times to match:
'Icaque' similar to 'Ica*que' -- true
'Icaque' similar to 'Icar*que' -- true
'Icaque' similar to 'I[a-c]*que' -- true
'Icaque' similar to '_*' -- true
'Icaque' similar to '[[:ALPHA:]]*' -- true
'Icaque' similar to 'Ica[xyz]*e' -- false
A plus sign (‘+
’) immediately following a character or class indicates that the preceding item must occur 1 or more times to match:
'Jujube' similar to 'Ju_+' -- true
'Jujube' similar to 'Ju+jube' -- true
'Jujube' similar to 'Jujuber+' -- false
'Jujube' similar to 'J[jux]+be' -- true
'Jujube' similar to 'J[[:DIGIT:]]+ujube' -- false
If a character or class is followed by a number enclosed in braces (‘{
’ and ‘}
’), it must be repeated exactly that number of times to match:
'Kiwi' similar to 'Ki{2}wi' -- false
'Kiwi' similar to 'K[ipw]{2}i' -- true
'Kiwi' similar to 'K[ipw]{2}' -- false
'Kiwi' similar to 'K[ipw]{3}' -- true
If the number is followed by a comma (‘,
’), the item must be repeated at least that number of times to match:
'Limone' similar to 'Li{2,}mone' -- false
'Limone' similar to 'Li{1,}mone' -- true
'Limone' similar to 'Li[nezom]{2,}' -- true
If the braces contain two numbers separated by a comma, the second number not smaller than the first, then the item must be repeated at least the first number and at most the second number of times to match:
'Mandarijn' similar to 'M[a-p]{2,5}rijn' -- true
'Mandarijn' similar to 'M[a-p]{2,3}rijn' -- false
'Mandarijn' similar to 'M[a-p]{2,3}arijn' -- true
The quantifiers ‘?
’, ‘*
’ and ‘+
’ are shorthand for {0,1}
, {0,}
and {1,}
, respectively.
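The equivalence can be checked directly; each pair below yields the same result:

```sql
-- '?' behaves like {0,1}, '*' like {0,}, and '+' like {1,}
select 'Hallon' similar to 'Hal?lon',     -- true
       'Hallon' similar to 'Hal{0,1}lon', -- true
       'Icaque' similar to 'Ica*que',     -- true
       'Icaque' similar to 'Ica{0,}que',  -- true
       'Jujube' similar to 'Ju_+',        -- true
       'Jujube' similar to 'Ju_{1,}'      -- true
from rdb$database;
```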
Regular expression terms can be OR’ed with the ‘|
’ operator.
A match is made when the argument string matches at least one of the terms:
'Nektarin' similar to 'Nek|tarin' -- false
'Nektarin' similar to 'Nektarin|Persika' -- true
'Nektarin' similar to 'M_+|N_+|P_+' -- true
One or more parts of the regular expression can be grouped into subexpressions (also called subpatterns) by placing them between parentheses (‘(
’ and ‘)
’).
A subexpression is a regular expression in its own right.
It can contain all the elements allowed in a regular expression, and can also have quantifiers added to it.
'Orange' similar to 'O(ra|ri|ro)nge' -- true
'Orange' similar to 'O(r[a-e])+nge' -- true
'Orange' similar to 'O(ra){2,4}nge' -- false
'Orange' similar to 'O(r(an|in)g|rong)?e' -- true
To match against a character that is special in regular expressions, that character has to be escaped. There is no default escape character; the user specifies one when needed:
'Peer (Poire)' similar to 'P[^ ]+ \(P[^ ]+\)' escape '\' -- true
'Pera [Pear]' similar to 'P[^ ]+ #[P[^ ]+#]' escape '#' -- true
'Päron-äppledryck' similar to 'P%$-ä%' escape '$' -- true
'Pärondryck' similar to 'P%--ä%' escape '-' -- false
The last line demonstrates that the escape character can also escape itself, if needed.
IS [NOT] DISTINCT FROM
<operand1> IS [NOT] DISTINCT FROM <operand2>
Two operands are considered DISTINCT (different) if they have a different value or if one of them is NULL
and the other non-null.
They are considered NOT DISTINCT (equal) if they have the same value or if both of them are NULL
.
IS [NOT] DISTINCT FROM
always returns TRUE or FALSE and never UNKNOWN
(NULL
) (unknown value).
Operators ‘=
’ and ‘<>
’, conversely, will return UNKNOWN
(NULL
) if one or both operands are NULL
.
Operand values | = | IS NOT DISTINCT FROM | <> | IS DISTINCT FROM |
---|---|---|---|---|
Same value | TRUE | TRUE | FALSE | FALSE |
Different values | FALSE | FALSE | TRUE | TRUE |
Both NULL | NULL | TRUE | NULL | FALSE |
One NULL, one non-NULL | NULL | FALSE | NULL | TRUE |
SELECT ID, NAME, TEACHER
FROM COURSES
WHERE START_DAY IS NOT DISTINCT FROM END_DAY;
-- PSQL fragment
IF (NEW.JOB IS DISTINCT FROM OLD.JOB)
THEN POST_EVENT 'JOB_CHANGED';
Boolean IS [NOT]
<value> IS [NOT] { TRUE | FALSE | UNKNOWN }
The IS
predicate with Boolean literal values checks if the expression on the left side matches the Boolean value on the right side.
The expression on the left side must be of type BOOLEAN
, otherwise an exception is raised.
The IS [NOT] UNKNOWN
is equivalent to IS [NOT] NULL
.
The right side of the predicate only accepts the literals TRUE
, FALSE
, UNKNOWN
, and NULL
.
It does not accept expressions.
-- Checking FALSE value
SELECT * FROM TBOOL WHERE BVAL IS FALSE;
ID BVAL
============= =======
2 <false>
-- Checking UNKNOWN value
SELECT * FROM TBOOL WHERE BVAL IS UNKNOWN;
ID BVAL
============= =======
3 <null>
IS [NOT] NULL
<value> IS [NOT] NULL
Since NULL
is not a value, these operators are not comparison operators.
The IS [NOT] NULL
predicate tests that the expression on the left side has a value (IS NOT NULL) or has no value (IS NULL).
Search for sales entries that have no shipment date set for them:
SELECT * FROM SALES
WHERE SHIP_DATE IS NULL;
Note regarding the IS predicates
The IS predicates, like the other comparison predicates, have no precedence over the others.
4.2.3. Existential Predicates
This group of predicates includes those that use subqueries to submit values for all kinds of assertions in search conditions.
Existential predicates are so called because they use various methods to test for the existence or non-existence of some condition, returning TRUE
if the existence or non-existence is confirmed or FALSE
otherwise.
EXISTS
[NOT] EXISTS (<select_stmt>)
The EXISTS
predicate uses a subquery expression as its argument.
It returns TRUE
if the subquery result contains at least one row, otherwise it returns FALSE
.
NOT EXISTS
returns FALSE
if the subquery result contains at least one row, otherwise it returns TRUE
.
The subquery can specify multiple columns, or SELECT *
, because the evaluation is made on the number of rows that match its criteria, not on the data.
-
Find those employees who have projects.
SELECT * FROM employee WHERE EXISTS(SELECT * FROM employee_project ep WHERE ep.emp_no = employee.emp_no)
-
Find those employees who have no projects.
SELECT * FROM employee WHERE NOT EXISTS(SELECT * FROM employee_project ep WHERE ep.emp_no = employee.emp_no)
IN
<value> [NOT] IN (<select_stmt> | <value_list>)

<value_list> ::= <value_1> [, <value_2> ...]
The IN
predicate tests whether the value of the expression on the left side is present in the set of values specified on the right side.
The set of values cannot have more than 65535 items.
The IN
predicate can be replaced with the following equivalent forms:
(<value> = <value_1> [OR <value> = <value_2> ...])

<value> = { ANY | SOME } (<select_stmt>)
When the IN
predicate is used in the search conditions of DML queries, the Firebird optimizer can use an index on the searched column, if a suitable one exists.
Lists that are known to be constant are pre-evaluated as invariants and cached as a binary search tree, making comparisons faster if the condition needs to be tested for many rows or if the value list is long.
In its second form, the IN
predicate tests whether the value of the expression on the left side is present — or not present, if NOT IN
is used — in the result of the subquery on the right side.
The subquery must specify only one column, otherwise the error “count of column list and variable list do not match” will occur.
Queries using an IN
predicate with a subquery can be replaced with a similar query using the EXISTS
predicate.
For example, the following query:
SELECT
model, speed, hd
FROM PC
WHERE
model IN (SELECT model
FROM product
WHERE maker = 'A');
can be replaced with a similar one using the EXISTS
predicate:
SELECT
model, speed, hd
FROM PC
WHERE
EXISTS (SELECT *
FROM product
WHERE maker = 'A'
AND product.model = PC.model);
However, a query using NOT IN
with a subquery does not always give the same result as its NOT EXISTS
counterpart.
The reason is that EXISTS
always returns TRUE or FALSE, whereas IN
returns NULL
in one of these two cases:
-
when the test value is
NULL
and theIN ()
list is not empty -
when the test value has no match in the
IN ()
list and at least one list element isNULL
It is in only these two cases that IN ()
will return NULL
while the EXISTS
predicate will return FALSE
(“no matching row found”).
In a search or, for example, an IF (…)
statement, both results mean “failure”, and it makes no difference to the outcome.
For the same data, NOT IN ()
will return NULL
, while NOT EXISTS
will return TRUE
, leading to opposite results.
As an example, suppose you have the following query:
-- Looking for people who were not born
-- on the same day as any famous New York citizen
SELECT P1.name AS NAME
FROM Personnel P1
WHERE P1.birthday NOT IN (SELECT C1.birthday
FROM Celebrities C1
WHERE C1.birthcity = 'New York');
Now, assume that the NY celebrities list is not empty and contains at least one NULL birthday.
Then for every citizen who does not share his birthday with a NY celebrity, NOT IN
will return NULL
, because that is what IN
does.
The search condition is thereby not satisfied and the citizen will be left out of the SELECT
result, which is wrong.
For citizens whose birthday does match with a celebrity’s birthday, NOT IN
will correctly return FALSE
, so they will be left out too, and no rows will be returned.
If the NOT EXISTS
form is used:
-- Looking for people who were not born
-- on the same day as any famous New York citizen
SELECT P1.name AS NAME
FROM Personnel P1
WHERE NOT EXISTS (SELECT *
FROM Celebrities C1
WHERE C1.birthcity = 'New York'
AND C1.birthday = P1.birthday);
non-matches will have a NOT EXISTS
result of TRUE
and their records will be in the result set.
If there is any chance of NULLs being encountered when searching for a non-match, you will want to use NOT EXISTS.
-
Find employees with the names “Pete”, “Ann” and “Roger”:
SELECT * FROM EMPLOYEE WHERE FIRST_NAME IN ('Pete', 'Ann', 'Roger');
-
Find all computers that have models whose manufacturer starts with the letter “A”:
SELECT model, speed, hd FROM PC WHERE model IN (SELECT model FROM product WHERE maker STARTING WITH 'A');
SINGULAR
[NOT] SINGULAR (<select_stmt>)
The SINGULAR
predicate takes a subquery as its argument and evaluates it as TRUE
if the subquery returns exactly one row, otherwise the predicate is evaluated as FALSE
.
The subquery may list several output columns, since the rows are not returned anyway; they are only tested for (singular) existence.
For brevity, people usually specify ‘SELECT *
’.
The SINGULAR
predicate can return only two values: TRUE
or FALSE
.
Find those employees who have only one project.
SELECT *
FROM employee
WHERE SINGULAR(SELECT *
FROM employee_project ep
WHERE ep.emp_no = employee.emp_no)
4.2.4. Quantified Subquery Predicates
A quantifier is a logical operator that sets the number of objects for which this condition is true. It is not a numeric quantity, but a logical one that connects the condition with the full set of possible objects. Such predicates are based on logical universal and existential quantifiers that are recognised in formal logic.
In subquery expressions, quantified predicates make it possible to compare separate values with the results of subqueries; they have the following common form:
<value expression> <comparison operator> <quantifier> <subquery>
ALL
<value> <op> ALL (<select_stmt>)
When the ALL
quantifier is used, the predicate is TRUE
if every value returned by the subquery satisfies the condition in the predicate of the main query.
Show only those clients whose ratings are higher than the rating of every client in Paris.
SELECT c1.*
FROM Customers c1
WHERE c1.rating > ALL
(SELECT c2.rating
FROM Customers c2
WHERE c2.city = 'Paris')
If the subquery returns an empty set, the predicate is TRUE for every left-side value, regardless of the operator. This may seem counter-intuitive. Nevertheless, it aligns perfectly with formal logic: if the set is empty, the predicate is true for every row in the set.
ANY
and SOME
<value> <op> {ANY | SOME} (<select_stmt>)
The quantifiers ANY
and SOME
are identical in their behaviour.
Both are specified in the SQL standard, and they can be used interchangeably to improve the readability of operators.
When the ANY
or the SOME
quantifier is used, the predicate is TRUE
if any of the values returned by the subquery satisfies the condition in the predicate of the main query.
If the subquery returns no rows at all, the predicate is automatically considered as FALSE
.
Show only those clients whose ratings are higher than those of one or more clients in Rome.
SELECT *
FROM Customers
WHERE rating > ANY
(SELECT rating
FROM Customers
WHERE city = 'Rome')
5. Data Definition (DDL) Statements
DDL is the data definition language subset of Firebird’s SQL language. DDL statements are used to create, alter and drop database objects. When a DDL statement is committed, the metadata for the object are created, altered or deleted.
5.1. DATABASE
This section describes how to create a database, connect to an existing database, alter the file structure of a database and how to drop a database. It also shows two methods to back up a database and how to switch the database to the “copy-safe” mode for performing an external backup safely.
5.1.1. CREATE DATABASE
Creates a new database
DSQL, ESQL
CREATE {DATABASE | SCHEMA} <filespec>
  [<db_initial_option> [<db_initial_option> ...]]
  [<db_config_option> [<db_config_option> ...]]

<db_initial_option> ::=
    USER username
  | PASSWORD 'password'
  | ROLE rolename
  | PAGE_SIZE [=] size
  | LENGTH [=] num [PAGE[S]]
  | SET NAMES 'charset'

<db_config_option> ::=
    DEFAULT CHARACTER SET default_charset [COLLATION collation] -- not supported in ESQL
  | <sec_file>
  | DIFFERENCE FILE 'diff_file' -- not supported in ESQL

<filespec> ::= "'" [server_spec]{filepath | db_alias} "'"

<server_spec> ::=
    host[/{port | service}]:
  | <protocol>://[host[:{port | service}]/]

<protocol> ::= inet | inet4 | inet6 | xnet

<sec_file> ::=
  FILE 'filepath'
  [LENGTH [=] num [PAGE[S]]]
  [STARTING [AT [PAGE]] pagenum]
Each db_initial_option and db_config_option can occur at most once, except sec_file, which can occur zero or more times.
Parameter | Description |
---|---|
filespec |
File specification for primary database file |
server_spec |
Remote server specification. Some protocols require specifying a hostname. Optionally includes a port number or service name. Required if the database is created on a remote server. |
filepath |
Full path and file name including its extension. The file name must be specified according to the rules of the platform file system being used. |
db_alias |
Database alias previously created in the databases.conf file |
host |
Host name or IP address of the server where the database is to be created |
port |
The port number where the remote server is listening (parameter RemoteServicePort in firebird.conf) |
service |
Service name.
Must match the parameter value of RemoteServiceName in firebird.conf |
username |
Username of the owner of the new database.
The maximum length is 63 characters.
The username can optionally be enclosed in single or double quotes.
When a username is enclosed in double quotes, it is case-sensitive following the rules for quoted identifiers.
When enclosed in single quotes, it behaves as if the value was specified without quotes.
The user must be an administrator or have the CREATE DATABASE privilege |
password |
Password of the user as the database owner.
When using the Legacy_Auth authentication plugin, only the first eight characters of the password are used |
rolename |
The name of the role whose rights should be taken into account when creating a database. The role name can be enclosed in single or double quotes. When the role name is enclosed in double quotes, it is case-sensitive following the rules for quoted identifiers. When enclosed in single quotes, it behaves as if the value was specified without quotes. |
size |
Page size for the database, in bytes. Possible values are 4096, 8192, 16384 and 32768. The default page size is 8192. |
num |
Maximum size of the primary database file, or a secondary file, in pages |
charset |
Specifies the character set of the connection available to a client connecting after the database is successfully created. Single quotes are required. |
default_charset |
Specifies the default character set for string data types |
collation |
Default collation for the default character set |
sec_file |
File specification for a secondary file |
pagenum |
Starting page number for a secondary database file |
diff_file |
File path and name for DIFFERENCE files (.delta files) for backup mode |
The CREATE DATABASE
statement creates a new database.
You can use CREATE DATABASE
or CREATE SCHEMA
.
They are synonymous, but we recommend always using CREATE DATABASE
as this may change in a future version of Firebird.
A database consists of one or more files. The first (main) file is called the primary file, subsequent files are called secondary file(s).
Multi-file Databases
Nowadays, multi-file databases are considered an anachronism. Multi-file databases made sense on old file systems where the size of any file was limited; for instance, on FAT32 you could not create a file larger than 4 GB.
The primary file specification is the name of the database file and its extension with the full path to it according to the rules of the OS platform file system being used. The database file must not exist at the moment the database is being created. If it does exist, you will get an error message, and the database will not be created.
If the full path to the database is not specified, the database will be created in one of the system directories. The particular directory depends on the operating system. For this reason, unless you have a strong reason to prefer that situation, always specify either the absolute path or an alias, when creating a database.
Using a Database Alias
You can use aliases instead of the full path to the primary database file.
Aliases are defined in the databases.conf
file in the following format:
alias = filepath
Executing a CREATE DATABASE statement with an alias as the file specification creates the database file at the location defined for that alias in databases.conf. If necessary, you can always fall back to isql to create a database.
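A minimal sketch, assuming a hypothetical alias `test` defined on the server:

```sql
-- databases.conf on the server (hypothetical entry):
--   test = /var/firebird/test.fdb
-- The client then only needs the alias:
CREATE DATABASE 'test'
USER 'wizard' PASSWORD 'player';
```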
Creating a Database on a Remote Server
If you create a database on a remote server, you need to specify the remote server specification. The remote server specification depends on the protocol being used. If you use the TCP/IP protocol to create a database, the primary file specification should look like this:
host[/{port|service}]:{filepath | db_alias}
Firebird also has a unified URL-like syntax for the remote server specification. In this syntax, the first part specifies the name of the protocol, then a host name or IP address, port number, and path of the primary database file, or an alias.
The following values can be specified as the protocol:
- inet
-
TCP/IP (first tries to connect using the IPv6 protocol, if it fails, then IPv4)
- inet4
-
TCP/IP v4
- inet6
-
TCP/IP v6
- xnet
-
local protocol (does not include a host, port and service name)
<protocol>://[host[:{port | service}]/]{filepath | db_alias}
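For illustration, a hypothetical host `baseserver` and alias `test` combined with the URL-like syntax (3050 is Firebird's default port):

```sql
-- TCP/IP v4, explicit port, database alias:
CREATE DATABASE 'inet4://baseserver:3050/test'
USER 'wizard' PASSWORD 'player';
```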
Optional Parameters for CREATE DATABASE
USER
andPASSWORD
-
The username and the password of an existing user in the security database (
security5.fdb
or whatever is configured in the SecurityDatabase configuration). You do not have to specify the username and password if theISC_USER
andISC_PASSWORD
environment variables are set. The user specified in the process of creating the database will be its owner. This will be important when considering database and object privileges. ROLE
-
The name of the role (usually
RDB$ADMIN
), which will be taken into account when creating the database. The role must be assigned to the user in the applicable security database. PAGE_SIZE
-
The desired database page size. This size will be set for the primary file and all secondary files of the database. If you specify a database page size of less than 4,096, it will be automatically rounded up to 4,096. Other values not equal to 4,096, 8,192, 16,384 or 32,768 will be changed to the closest smaller supported value. If the database page size is not specified, the default value of 8,192 is used.
Bigger Isn’t Always Better. Larger page sizes can fit more records on a single page and support wider and more numerous indexes, but they also waste more space for blobs (compare the wasted space of a 3 KB blob at page size 4,096 with one at 32,768: roughly 1 KB vs roughly 29 KB) and increase the memory consumption of the page cache.
LENGTH
-
The maximum size of the primary or secondary database file, in pages. When a database is created, its primary and secondary files will occupy the minimum number of pages necessary to store the system data, regardless of the value specified in the
LENGTH
clause. TheLENGTH
value does not affect the size of the only (or last, in a multi-file database) file. The file will keep increasing its size automatically when necessary. SET NAMES
-
The character set of the connection available after the database is successfully created. The character set
NONE
is used by default. Notice that the character set should be enclosed in a pair of apostrophes (single quotes). DEFAULT CHARACTER SET
-
The default character set for creating data structures of string data types. Character sets are used for
CHAR
,VARCHAR
andBLOB SUB_TYPE TEXT
data types. The character setNONE
is used by default. It is also possible to specify the defaultCOLLATION
for the default character set, making that collation the default for the default character set. The default will be used for the entire database except where an alternative character set, with or without a specified collation, is used explicitly for a field, domain, variable, cast expression, etc. STARTING AT
-
The database page number at which the next secondary database file should start. When the previous file is fully filled with data according to the specified page number, the system will start adding new data to the next database file.
DIFFERENCE FILE
-
The path and name for the file delta that stores any mutations to the database file after it has been switched to the “copy-safe” mode by the
ALTER DATABASE BEGIN BACKUP
statement. For the detailed description of this clause, seeALTER DATABASE
.
Specifying the Database Dialect
Databases are created in Dialect 3 by default.
For the database to be created in Dialect 1, you will need to execute the statement SET SQL DIALECT 1
from script or the client application, e.g. in isql, before the CREATE DATABASE
statement.
Who Can Create a Database
The CREATE DATABASE
statement can be executed by:
-
Users with the
CREATE DATABASE
privilege
Examples Using CREATE DATABASE
-
Creating a database in Windows, located on disk D with a page size of 4,096. The owner of the database will be the user wizard. The database will be in Dialect 1, and will use
WIN1251
as its default character set.SET SQL DIALECT 1; CREATE DATABASE 'D:\test.fdb' USER 'wizard' PASSWORD 'player' PAGE_SIZE = 4096 DEFAULT CHARACTER SET WIN1251;
-
Creating a database in the Linux operating system with a page size of 8,192 (default). The owner of the database will be the user wizard. The database will be in Dialect 3 and will use
UTF8
as its default character set, withUNICODE_CI_AI
as the default collation.CREATE DATABASE '/home/firebird/test.fdb' USER 'wizard' PASSWORD 'player' DEFAULT CHARACTER SET UTF8 COLLATION UNICODE_CI_AI;
-
Creating a database on the remote server “baseserver” with the path specified in the alias “test” that has been defined previously in the file
databases.conf
. The TCP/IP protocol is used. The owner of the database will be the user wizard. The database will be in Dialect 3 and will useUTF8
as its default character set.CREATE DATABASE 'baseserver:test' USER 'wizard' PASSWORD 'player' DEFAULT CHARACTER SET UTF8;
-
Creating a database in Dialect 3 with
UTF8
as its default character set. The primary file will contain up to 10,000 pages with a page size of 8,192. As soon as the primary file has reached the maximum number of pages, Firebird will start allocating pages to the secondary filetest.fdb2
. If that file is filled up to its maximum as well,test.fdb3
becomes the recipient of all new page allocations. As the last file, it has no page limit imposed on it by Firebird. New allocations will continue for as long as the file system allows it or until the storage device runs out of free space. If aLENGTH
parameter were supplied for this last file, it would be ignored.SET SQL DIALECT 3; CREATE DATABASE 'baseserver:D:\test.fdb' USER 'wizard' PASSWORD 'player' PAGE_SIZE = 8192 DEFAULT CHARACTER SET UTF8 FILE 'D:\test.fdb2' STARTING AT PAGE 10001 FILE 'D:\test.fdb3' STARTING AT PAGE 20001;
-
Creating a database in Dialect 3 with
UTF8
as its default character set. The primary file will contain up to 10,000 pages with a page size of 8,192. As far as file size and the use of secondary files are concerned, this database will behave exactly like the one in the previous example.SET SQL DIALECT 3; CREATE DATABASE 'baseserver:D:\test.fdb' USER 'wizard' PASSWORD 'player' PAGE_SIZE = 8192 LENGTH 10000 PAGES DEFAULT CHARACTER SET UTF8 FILE 'D:\test.fdb2' FILE 'D:\test.fdb3' STARTING AT PAGE 20001;
5.1.2. ALTER DATABASE
Alters the file organisation of a database, toggles its “copy-safe” state, manages encryption, and other database-wide configuration
DSQL, ESQL — limited feature set
ALTER {DATABASE | SCHEMA}
  <alter_db_option> [<alter_db_option> ...]

<alter_db_option> ::=
    <add_sec_clause>
  | {ADD DIFFERENCE FILE 'diff_file' | DROP DIFFERENCE FILE}
  | {BEGIN | END} BACKUP
  | SET DEFAULT CHARACTER SET charset
  | {ENCRYPT WITH plugin_name [KEY key_name] | DECRYPT}
  | SET LINGER TO linger_duration
  | DROP LINGER
  | SET DEFAULT SQL SECURITY {INVOKER | DEFINER}
  | {ENABLE | DISABLE} PUBLICATION
  | INCLUDE <pub_table_filter> TO PUBLICATION
  | EXCLUDE <pub_table_filter> FROM PUBLICATION

<add_sec_clause> ::= ADD <sec_file> [<sec_file> ...]

<sec_file> ::=
  FILE 'filepath'
  [STARTING [AT [PAGE]] pagenum]
  [LENGTH [=] num [PAGE[S]]]

<pub_table_filter> ::= ALL | TABLE table_name [, table_name ...]
Multiple files can be added in one ADD FILE clause. Multiple occurrences of add_sec_clause (ADD FILE clauses) are also allowed in a single statement.
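A sketch of adding two secondary files in one ADD clause; the file names are hypothetical:

```sql
-- One ADD clause listing two secondary files:
ALTER DATABASE
  ADD FILE 'D:\test2.fdb' STARTING AT PAGE 10001
      FILE 'D:\test3.fdb' STARTING AT PAGE 20001;
```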
Parameter | Description |
---|---|
add_sec_clause |
Adding a secondary database file |
sec_file |
File specification for secondary file |
filepath |
Full path and file name of the delta file or secondary database file |
pagenum |
Page number from which the secondary database file is to start |
num |
Maximum size of the secondary file in pages |
diff_file |
File path and name of the .delta file (difference file) |
charset |
New default character set of the database |
linger_duration |
Duration of linger delay in seconds; must be greater than or equal to 0 (zero) |
plugin_name |
The name of the encryption plugin |
key_name |
The name of the encryption key |
pub_table_filter |
Filter of tables to include to or exclude from publication |
table_name |
Name (identifier) of a table |
The ALTER DATABASE
statement can:
-
add secondary files to a database
-
switch a single-file database into and out of the “copy-safe” mode (DSQL only)
-
set or unset the path and name of the delta file for physical backups (DSQL only)
Who Can Alter the Database
The ALTER DATABASE
statement can be executed by:
-
Users with the
ALTER DATABASE
privilege
Parameters for ALTER DATABASE
ADD (FILE)
-
Adds secondary files to the database. It is necessary to specify the full path to the file and the name of the secondary file. The description for the secondary file is similar to the one given for the
CREATE DATABASE
statement. ADD DIFFERENCE FILE
-
Specifies the path and name of the difference file (or, delta file) that stores any mutations to the database whenever it is switched to the “copy-safe” mode. This clause does not add a file, but it configures name and path of the delta file when the database is in “copy-safe” mode. To change the existing setting, you should delete the previously specified description of the delta file using the
DROP DIFFERENCE FILE
clause before specifying the new description of the delta file. If the path and name of the delta file are not configured, the file will have the same path and name as the database, but with the.delta
file extension.If only a filename is specified, the delta file will be created in the current directory of the server. On Windows, this will be the system directory — a very unwise location to store volatile user files and contrary to Windows file system rules.
DROP DIFFERENCE FILE
-
Deletes the description (path and name) of the difference file specified previously in the
ADD DIFFERENCE FILE
clause. This does not delete a file, butDROP DIFFERENCE FILE
clears (resets) the path and name of the delta file from the database header. Next time the database is switched to the “copy-safe” mode, the default values will be used (i.e. the same path and name as those of the database, but with the.delta
extension). BEGIN BACKUP
-
Switches the database to the “copy-safe” mode.
ALTER DATABASE
with this clause freezes the main database file, making it possible to back it up safely using file system tools, even if users are connected and performing operations with data. Until the backup state of the database is reverted to NORMAL, all changes made to the database will be written to the delta (difference) file.Despite its name, the
ALTER DATABASE BEGIN BACKUP
statement does not start a backup process, but only freezes the database, to create the conditions for doing a task that requires the database file to be read-only temporarily. END BACKUP
-
Switches the database from the “copy-safe” mode to the normal mode. A statement with this clause merges the difference file with the main database file and restores the normal operation of the database. Once the
END BACKUP
process starts, the conditions no longer exist for creating safe backups by means of file system tools.Use of
BEGIN BACKUP
andEND BACKUP
and copying the database files with filesystem tools is not safe with multi-file databases! Use this method only on single-file databases. Making a safe backup with the gbak utility remains possible at all times, although running gbak while the database is in LOCKED or MERGE state is not recommended.
SET DEFAULT CHARACTER SET
-
Changes the default character set of the database. This change does not affect existing data or columns. The new default character set will only be used in subsequent DDL commands. To modify the default collation, use
ALTER CHARACTER SET
on the default character set of the database. ENCRYPT WITH
-
See Encrypting a Database in the Security chapter.
DECRYPT
-
See Decrypting a Database in the Security chapter.
SET LINGER TO
-
Sets the linger-delay. The linger-delay applies only to Firebird SuperServer, and is the number of seconds the server keeps a database file (and its caches) open after the last connection to that database was closed. This can help to improve performance at low cost, when the database is opened and closed frequently, by keeping resources “warm” for the next connection.
This mode can be useful for web applications — without a connection pool — where connections to the database usually “live” for a very short time.
The
SET LINGER TO
andDROP LINGER
clauses can be combined in a single statement, but the last clause “wins”. For example,ALTER DATABASE SET LINGER TO 5 DROP LINGER
will set the linger-delay to 0 (no linger), whileALTER DATABASE DROP LINGER SET LINGER to 5
will set the linger-delay to 5 seconds. DROP LINGER
-
Drops the linger-delay (sets it to zero). Using
DROP LINGER
is equivalent to usingSET LINGER TO 0
.Dropping
LINGER
is not an ideal solution for the occasional need to turn it off for once-only operations where the server needs a forced shutdown. The gfix utility now has the-NoLinger
switch, which will close the specified database immediately after the last attachment is gone, regardless of theLINGER
setting in the database. TheLINGER
setting is retained and works normally the next time.The same one-off override is also available through the Services API, using the tag
isc_spb_prp_nolinger
, e.g. (in one line):fbsvcmgr host:service_mgr user sysdba password xxx action_properties dbname employee prp_nolinger
The
DROP LINGER
andSET LINGER TO
clauses can be combined in a single statement, but the last clause “wins”. SET DEFAULT SQL SECURITY
-
Specifies the default
SQL SECURITY
option to apply at runtime for objects without the SQL Security property set. See also SQL Security in chapter Security. ENABLE PUBLICATION
-
Enables publication of this database for replication. Replication begins (or continues) with the next transaction started after this transaction commits.
DISABLE PUBLICATION
-
Disables publication of this database for replication. Replication is disabled immediately after commit.
EXCLUDE … FROM PUBLICATION
-
Excludes tables from publication. If the
EXCLUDE ALL FROM PUBLICATION
clause is used, all tables created afterward will also be excluded from publication, unless overridden explicitly in the CREATE TABLE
statement. INCLUDE … TO PUBLICATION
-
Includes tables to publication. If the
INCLUDE ALL TO PUBLICATION
clause is used, all tables created afterward will also be replicated, unless overridden explicitly in theCREATE TABLE
statement.
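A sketch of several of these database-wide options in one script; the table names CUSTOMERS and ORDERS are hypothetical:

```sql
-- New objects without an explicit SQL SECURITY clause
-- will run with the rights of their definer:
ALTER DATABASE SET DEFAULT SQL SECURITY DEFINER;

-- Enable replication and choose what is published:
ALTER DATABASE ENABLE PUBLICATION;
ALTER DATABASE INCLUDE TABLE CUSTOMERS, ORDERS TO PUBLICATION;

-- Later, stop publishing one table:
ALTER DATABASE EXCLUDE TABLE ORDERS FROM PUBLICATION;
```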
Examples of ALTER DATABASE
Usage
-
Adding a secondary file to the database. As soon as 30000 pages are filled in the previous primary or secondary file, the Firebird engine will start adding data to the secondary file
test4.fdb
.ALTER DATABASE ADD FILE 'D:\test4.fdb' STARTING AT PAGE 30001;
-
Specifying the path and name of the delta file:
ALTER DATABASE ADD DIFFERENCE FILE 'D:\test.diff';
-
Deleting the description of the delta file:
ALTER DATABASE DROP DIFFERENCE FILE;
-
Switching the database to the “copy-safe” mode:
ALTER DATABASE BEGIN BACKUP;
-
Switching the database back from the “copy-safe” mode to the normal operation mode:
ALTER DATABASE END BACKUP;
-
Changing the default character set for a database to
WIN1252
ALTER DATABASE SET DEFAULT CHARACTER SET WIN1252;
-
Setting a linger-delay of 30 seconds
ALTER DATABASE SET LINGER TO 30;
-
Encrypting the database with a plugin called
DbCrypt
ALTER DATABASE ENCRYPT WITH DbCrypt;
-
Decrypting the database
ALTER DATABASE DECRYPT;
5.1.3. DROP DATABASE
Drops (deletes) the database of the current connection
DSQL, ESQL
DROP DATABASE
The DROP DATABASE
statement deletes the current database.
Before deleting a database, you have to connect to it.
The statement deletes the primary file, all secondary files and all shadow files.
Contrary to CREATE DATABASE and ALTER DATABASE, DROP SCHEMA is not a valid alias for DROP DATABASE. This is intentional.
Who Can Drop a Database
The DROP DATABASE
statement can be executed by:
-
Users with the
DROP DATABASE
privilege
5.2. SHADOW
A shadow is an exact, page-by-page copy of a database. Once a shadow is created, all changes made in the database are immediately reflected in the shadow. If the primary database file becomes unavailable for some reason, the DBMS will switch to the shadow.
This section describes how to create and delete shadow files.
5.2.1. CREATE SHADOW
Creates a shadow file for the current database
DSQL, ESQL
CREATE SHADOW <sh_num> [{AUTO | MANUAL}] [CONDITIONAL]
  'filepath' [LENGTH [=] num [PAGE[S]]]
  [<secondary_file> ...]

<secondary_file> ::=
  FILE 'filepath'
  [STARTING [AT [PAGE]] pagenum]
  [LENGTH [=] num [PAGE[S]]]
Parameter | Description |
---|---|
sh_num |
Shadow number — a positive number identifying the shadow set |
filepath |
The name of the shadow file and the path to it, in accord with the rules of the operating system |
num |
Maximum shadow size, in pages |
secondary_file |
Secondary file specification |
pagenum |
The number of the page at which the secondary shadow file should start |
The CREATE SHADOW
statement creates a new shadow.
The shadow starts duplicating the database right at the moment it is created.
It is not possible for a user to connect to a shadow.
Like a database, a shadow may be multi-file. The number and size of a shadow’s files are not related to the number and size of the files of the shadowed database.
The page size for shadow files is set to be equal to the database page size and cannot be changed.
If a calamity occurs involving the original database, the system converts the shadow to a copy of the database and switches to it.
The shadow is then unavailable.
What happens next depends on the MODE
option.
AUTO | MANUAL
Modes
When a shadow is converted to a database, it becomes unavailable. A shadow might alternatively become unavailable because someone accidentally deletes its file, or the disk space where the shadow files are stored is exhausted or is itself damaged.
-
If the AUTO mode is selected (the default value), shadowing ceases automatically, all references to it are deleted from the database header, and the database continues functioning normally.
If the
CONDITIONAL
option was set, the system will attempt to create a new shadow to replace the lost one. It does not always succeed, however, and a new one may need to be created manually. -
If the MANUAL mode attribute is set when the shadow becomes unavailable, all attempts to connect to the database and to query it will produce error messages. The database will remain inaccessible until either the shadow again becomes available, or the database administrator deletes it using the
DROP SHADOW
statement.MANUAL
should be selected if continuous shadowing is more important than uninterrupted operation of the database.
Options for CREATE SHADOW
LENGTH
-
Specifies the maximum size of the primary or secondary shadow file in pages. The
LENGTH
value does not affect the size of the only shadow file, nor the last if it is a set. The last (or only) file will keep automatically growing as long as it is necessary. STARTING AT
-
Specifies the shadow page number at which the next shadow file should start. The system will start adding new data to the next shadow file when the previous file is filled with data up to the specified page number.
You can verify the sizes, names and location of the shadow files by connecting to the database using isql and running the command |
Who Can Create a Shadow
The CREATE SHADOW
statement can be executed by:
-
Users with the
ALTER DATABASE
privilege
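Two sketches using hypothetical file paths:

```sql
-- Shadow set 1 in AUTO mode (the default): the database keeps
-- running if the shadow becomes unavailable.
CREATE SHADOW 1 AUTO 'g:\data\test.shd';

-- Conditional multi-file shadow set 2: a replacement shadow is
-- created automatically if this one is activated or lost.
CREATE SHADOW 2 CONDITIONAL 'g:\data\test.sh1'
  LENGTH 8000 PAGES
  FILE 'g:\data\test.sh2';
```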
5.2.2. DROP SHADOW
Drops (deletes) a shadow file from the current database
DSQL, ESQL
DROP SHADOW sh_num [{DELETE | PRESERVE} FILE]
Parameter | Description |
---|---|
sh_num |
Shadow number — a positive number identifying the shadow set |
The DROP SHADOW
statement deletes the specified shadow for the current database.
When a shadow is dropped, all files related to it are deleted and shadowing to the specified sh_num ceases.
The optional DELETE FILE
clause makes this behaviour explicit.
Conversely, the PRESERVE FILE
clause will remove the shadow from the database, but the file itself will not be deleted.
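For example, to stop shadowing for shadow set 1 while leaving its file on disk (a sketch):

```sql
-- Ceases shadowing; the shadow file itself remains on disk:
DROP SHADOW 1 PRESERVE FILE;
```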
5.3. DOMAIN
DOMAIN
is one of the object types in a relational database.
A domain is created as a specific data type with attributes attached to it (think of attributes like length, precision or scale, nullability, check constraints).
Once a domain has been defined in the database, it can be reused repeatedly to define table columns, PSQL arguments and PSQL local variables.
Those objects inherit all attributes of the domain.
Some attributes can be overridden when the new object is defined, if required.
This section describes the syntax of statements used to create, alter and drop domains. A detailed description of domains and their usage can be found in Custom Data Types — Domains.
5.3.1. CREATE DOMAIN
Creates a new domain
DSQL, ESQL
CREATE DOMAIN name [AS] <datatype>
  [DEFAULT {<literal> | NULL | <context_var>}]
  [NOT NULL] [CHECK (<dom_condition>)]
  [COLLATE collation_name]

<datatype> ::=
    <scalar_datatype> | <blob_datatype> | <array_datatype>

<scalar_datatype> ::= !! See Scalar Data Types Syntax !!

<blob_datatype> ::= !! See BLOB Data Types Syntax !!

<array_datatype> ::= !! See Array Data Types Syntax !!

<dom_condition> ::=
    <val> <operator> <val>
  | <val> [NOT] BETWEEN <val> AND <val>
  | <val> [NOT] IN ({<val> [, <val> ...] | <select_list>})
  | <val> IS [NOT] NULL
  | <val> IS [NOT] DISTINCT FROM <val>
  | <val> [NOT] CONTAINING <val>
  | <val> [NOT] STARTING [WITH] <val>
  | <val> [NOT] LIKE <val> [ESCAPE <val>]
  | <val> [NOT] SIMILAR TO <val> [ESCAPE <val>]
  | <val> <operator> {ALL | SOME | ANY} (<select_list>)
  | [NOT] EXISTS (<select_expr>)
  | [NOT] SINGULAR (<select_expr>)
  | (<dom_condition>)
  | NOT <dom_condition>
  | <dom_condition> OR <dom_condition>
  | <dom_condition> AND <dom_condition>

<operator> ::=
    <> | != | ^= | ~= | = | < | > | <= | >=
  | !< | ^< | ~< | !> | ^> | ~>

<val> ::=
    VALUE
  | <literal>
  | <context_var>
  | <expression>
  | NULL
  | NEXT VALUE FOR genname
  | GEN_ID(genname, <val>)
  | CAST(<val> AS <cast_type>)
  | (<select_one>)
  | func([<val> [, <val> ...]])

<cast_type> ::= <domain_or_non_array_type> | <array_datatype>

<domain_or_non_array_type> ::= !! See Scalar Data Types Syntax !!
Parameter | Description |
---|---|
name |
Domain name. The maximum length is 63 characters |
datatype |
SQL data type |
literal |
A literal value that is compatible with datatype |
context_var |
Any context variable whose type is compatible with datatype |
dom_condition |
Domain condition |
collation_name |
Name of a collation that is valid for charset_name, if it is supplied with datatype or, otherwise, is valid for the default character set of the database |
select_one |
A scalar SELECT statement that selects one column and returns only one row |
select_list |
A SELECT statement that selects one column and returns zero or more rows |
select_expr |
A SELECT statement that selects one or more columns and returns zero or more rows |
expression |
An expression resolving to a value that is compatible with datatype |
genname |
Sequence (generator) name |
func |
Internal function or UDF |
The CREATE DOMAIN
statement creates a new domain.
Any SQL data type can be specified as the domain type.
Type-specific Details
- Array Types
-
-
If the domain is to be an array, the base type can be any SQL data type except
BLOB
and array. -
The dimensions of the array are specified between square brackets.
-
For each array dimension, one or two integer numbers define the lower and upper boundaries of its index range:
-
By default, arrays are 1-based. The lower boundary is implicit and only the upper boundary need be specified. A single number smaller than 1 defines the range num..1 and a number greater than 1 defines the range 1..num.
-
Two numbers separated by a colon (‘
:
’) and optional whitespace, the second greater than the first, can be used to define the range explicitly. One or both boundaries can be less than zero, as long as the upper boundary is greater than the lower.
-
-
When the array has multiple dimensions, the range definitions for each dimension must be separated by commas and optional whitespace.
-
Subscripts are validated only if an array actually exists. It means that no error messages regarding invalid subscripts will be returned if selecting a specific element returns nothing or if an array field is
NULL
.
-
- String Types
-
You can use the
CHARACTER SET
clause to specify the character set for theCHAR
,VARCHAR
andBLOB
(SUB_TYPE TEXT
) types. If the character set is not specified, the character set specified asDEFAULT CHARACTER SET
of the database will be used. If the database has no default character set, the character setNONE
is applied by default when you create a character domain.With character set
NONE
, character data are stored and retrieved the way they were submitted. Data in any encoding can be added to a column based on such a domain, but it is impossible to add this data to a column with a different encoding. Because no transliteration is performed between the source and destination encodings, errors may result. DEFAULT
Clause-
The optional
DEFAULT
clause allows you to specify a default value for the domain. This value will be added to the table column that inherits this domain when theINSERT
statement is executed, if no value is specified for it in the DML statement. Local variables and arguments in PSQL modules that reference this domain will be initialized with the default value. For the default value, use a literal of a compatible type or a context variable of a compatible type. NOT NULL
Constraint-
Columns and variables based on a domain with the
NOT NULL
constraint will be prevented from being written asNULL
, i.e. a value is required.When creating a domain, take care to avoid specifying limitations that would contradict one another. For instance,
NOT NULL
andDEFAULT NULL
are contradictory. CHECK
Constraint(s)-
The optional
CHECK
clause specifies constraints for the domain. A domain constraint specifies conditions that must be satisfied by the values of table columns or variables that inherit from the domain. A condition must be enclosed in parentheses. A condition is a logical expression (also called a predicate) that can return the Boolean resultsTRUE
,FALSE
andUNKNOWN
. A condition is considered satisfied if the predicate returns the valueTRUE
or “unknown value” (equivalent toNULL
). If the predicate returnsFALSE
, the condition for acceptance is not met. VALUE
Keyword-
The keyword
VALUE
in a domain constraint substitutes for the table column that is based on this domain or for a variable in a PSQL module. It contains the value assigned to the variable or the table column. VALUE
can be used anywhere in the CHECK
constraint, though it is usually used in the left part of the condition.
COLLATE
-
The optional
COLLATE
clause allows you to specify the collation if the domain is based on one of the string data types, including BLOB
s with text subtypes. If no collation is specified, the collation will be the default collation of the specified character set at the time the domain is created.
Who Can Create a Domain
The CREATE DOMAIN
statement can be executed by:
-
Users with the
CREATE DOMAIN
privilege
CREATE DOMAIN
Examples
-
Creating a domain that can take values greater than 1,000, with a default value of 10,000.
CREATE DOMAIN CUSTNO AS INTEGER DEFAULT 10000 CHECK (VALUE > 1000);
-
Creating a domain that can take the values 'Yes' and 'No' in the default character set specified during the creation of the database.
CREATE DOMAIN D_BOOLEAN AS CHAR(3) CHECK (VALUE IN ('Yes', 'No'));
-
Creating a domain with the
UTF8
character set and theUNICODE_CI_AI
collation.
CREATE DOMAIN FIRSTNAME AS VARCHAR(30) CHARACTER SET UTF8 COLLATE UNICODE_CI_AI;
-
Creating a domain of the
DATE
type that will not accept NULL
and uses the current date as the default value.
CREATE DOMAIN D_DATE AS DATE DEFAULT CURRENT_DATE NOT NULL;
-
Creating a domain defined as an array of 2 elements of the
NUMERIC(18, 3)
type. The starting array index is 1.
CREATE DOMAIN D_POINT AS NUMERIC(18, 3) [2];
Domains defined over an array type may be used only to define table columns. You cannot use array domains to define local variables in PSQL modules.
-
Creating a domain whose elements can only be country codes defined in the
COUNTRY
table.
CREATE DOMAIN D_COUNTRYCODE AS CHAR(3)
  CHECK (EXISTS(SELECT * FROM COUNTRY WHERE COUNTRYCODE = VALUE));
The example is given only to show the possibility of using predicates with queries in the domain test condition. It is not recommended to create this style of domain in practice unless the lookup table contains data that are never deleted.
5.3.2. ALTER DOMAIN
Alters the attributes of a domain or renames a domain
DSQL, ESQL
ALTER DOMAIN domain_name
[TO new_name]
[TYPE <datatype>]
[{SET DEFAULT {<literal> | NULL | <context_var>} | DROP DEFAULT}]
[{SET | DROP} NOT NULL]
[{ADD [CONSTRAINT] CHECK (<dom_condition>) | DROP CONSTRAINT}]
<datatype> ::=
<scalar_datatype> | <blob_datatype>
<scalar_datatype> ::=
!! See Scalar Data Types Syntax !!
<blob_datatype> ::=
!! See BLOB Data Types Syntax !!
!! See also CREATE DOMAIN
Syntax !!
| Parameter | Description |
|---|---|
| new_name | New name for the domain. The maximum length is 63 characters |
| literal | A literal value that is compatible with datatype |
| context_var | Any context variable whose type is compatible with datatype |
The ALTER DOMAIN
statement enables changes to the current attributes of a domain, including its name.
You can make any number of domain alterations in one ALTER DOMAIN
statement.
ALTER DOMAIN
clauses
TO name
-
Renames the domain, as long as there are no dependencies on the domain, i.e. table columns, local variables or procedure arguments referencing it.
SET DEFAULT
-
Sets a new default value for the domain, replacing any existing default.
DROP DEFAULT
-
Deletes a previously specified default value and replaces it with
NULL
.
SET NOT NULL
-
Adds a
NOT NULL
constraint to the domain; columns or parameters of this domain will be prevented from being written as NULL
, i.e. a value is required.
Adding a
NOT NULL
constraint to an existing domain will subject all columns using this domain to a full data validation, so ensure that the columns have no nulls before attempting the change.
DROP NOT NULL
-
Drops the
NOT NULL
constraint from the domain.
An explicit
NOT NULL
constraint on a column that depends on a domain prevails over the domain. In this situation, the modification of the domain to make it nullable does not propagate to the column.
ADD CONSTRAINT CHECK
-
Adds a
CHECK
constraint to the domain. If the domain already has a CHECK
constraint, it has to be deleted first, using an ALTER DOMAIN
statement that includes a DROP CONSTRAINT
clause.
TYPE
-
Changes the data type of the domain to a different, compatible one. The system will forbid any change to the type that could result in data loss. An example would be if the number of characters in the new type were smaller than in the existing type.
When you alter the attributes of a domain, existing PSQL code may become invalid. For information on how to detect it, read the piece entitled The RDB$VALID_BLR Field in Appendix A.
What ALTER DOMAIN
Cannot Alter
-
If the domain was declared as an array, it is not possible to change its type or its dimensions; nor can any other type be changed to an array type.
-
The collation cannot be changed without dropping the domain and recreating it with the desired attributes.
Who Can Alter a Domain
The ALTER DOMAIN
statement can be executed by:
-
The owner of the domain
-
Users with the
ALTER ANY DOMAIN
privilege
Domain alterations can be prevented by dependencies from objects to which the user does not have sufficient privileges.
ALTER DOMAIN
Examples
-
Changing the data type to
INTEGER
and setting or changing the default value to 2,000:
ALTER DOMAIN CUSTNO TYPE INTEGER SET DEFAULT 2000;
-
Renaming a domain.
ALTER DOMAIN D_BOOLEAN TO D_BOOL;
-
Deleting the default value and adding a constraint for the domain:
ALTER DOMAIN D_DATE DROP DEFAULT ADD CONSTRAINT CHECK (VALUE >= date '01.01.2000');
-
Changing the
CHECK
constraint:
ALTER DOMAIN D_DATE DROP CONSTRAINT;
ALTER DOMAIN D_DATE ADD CONSTRAINT
  CHECK (VALUE BETWEEN date '01.01.1900' AND date '31.12.2100');
-
Changing the data type to increase the permitted number of characters:
ALTER DOMAIN FIRSTNAME TYPE VARCHAR(50) CHARACTER SET UTF8;
-
Adding a
NOT NULL
constraint:
ALTER DOMAIN FIRSTNAME SET NOT NULL;
-
Removing a
NOT NULL
constraint:
ALTER DOMAIN FIRSTNAME DROP NOT NULL;
5.3.3. DROP DOMAIN
Drops an existing domain
DSQL, ESQL
DROP DOMAIN domain_name
The DROP DOMAIN
statement deletes a domain that exists in the database.
It is not possible to delete a domain if it is referenced by any database table columns or used in any PSQL module.
To delete a domain that is in use, all columns in all tables that refer to the domain have to be dropped and all references to the domain have to be removed from PSQL modules.
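As an illustration, the D_COUNTRYCODE domain created earlier could be dropped once nothing references it (a sketch; the domain name is taken from the earlier example):

```sql
-- Fails if any table column or PSQL module still references the domain
DROP DOMAIN D_COUNTRYCODE;
```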
Who Can Drop a Domain
The DROP DOMAIN
statement can be executed by:
-
The owner of the domain
-
Users with the
DROP ANY DOMAIN
privilege
5.4. TABLE
As a relational DBMS, Firebird stores data in tables. A table is a flat, two-dimensional structure containing any number of rows. Table rows are often called records.
All rows in a table have the same structure and consist of columns. Table columns are often called fields. A table must have at least one column. Each column contains a single type of SQL data.
This section describes how to create, alter and drop tables in a database.
5.4.1. CREATE TABLE
Creates a table
DSQL, ESQL
CREATE [GLOBAL TEMPORARY] TABLE tablename
  [EXTERNAL [FILE] 'filespec']
  (<col_def> [, {<col_def> | <tconstraint>} ...])
  [{<table_attrs> | <gtt_table_attrs>}]

<col_def> ::=
  <regular_col_def> | <computed_col_def> | <identity_col_def>

<regular_col_def> ::=
  colname {<datatype> | domainname}
  [DEFAULT {<literal> | NULL | <context_var>}]
  [<col_constraint> ...]
  [COLLATE collation_name]

<computed_col_def> ::=
  colname [{<datatype> | domainname}]
  {COMPUTED [BY] | GENERATED ALWAYS AS} (<expression>)

<identity_col_def> ::=
  colname {<datatype> | domainname}
  GENERATED {ALWAYS | BY DEFAULT} AS IDENTITY
  [(<identity_col_option>...)]
  [<col_constraint> ...]

<identity_col_option> ::=
  START WITH start_value | INCREMENT [BY] inc_value

<datatype> ::=
  <scalar_datatype> | <blob_datatype> | <array_datatype>

<scalar_datatype> ::=
  !! See Scalar Data Types Syntax !!

<blob_datatype> ::=
  !! See BLOB Data Types Syntax !!

<array_datatype> ::=
  !! See Array Data Types Syntax !!

<col_constraint> ::=
  [CONSTRAINT constr_name]
    { PRIMARY KEY [<using_index>]
    | UNIQUE [<using_index>]
    | REFERENCES other_table [(colname)] [<using_index>]
        [ON DELETE {NO ACTION | CASCADE | SET DEFAULT | SET NULL}]
        [ON UPDATE {NO ACTION | CASCADE | SET DEFAULT | SET NULL}]
    | CHECK (<check_condition>)
    | NOT NULL }

<tconstraint> ::=
  [CONSTRAINT constr_name]
    { PRIMARY KEY (<col_list>) [<using_index>]
    | UNIQUE (<col_list>) [<using_index>]
    | FOREIGN KEY (<col_list>)
        REFERENCES other_table [(<col_list>)] [<using_index>]
        [ON DELETE {NO ACTION | CASCADE | SET DEFAULT | SET NULL}]
        [ON UPDATE {NO ACTION | CASCADE | SET DEFAULT | SET NULL}]
    | CHECK (<check_condition>) }

<col_list> ::= colname [, colname ...]

<using_index> ::= USING [ASC[ENDING] | DESC[ENDING]] INDEX indexname

<check_condition> ::=
    <val> <operator> <val>
  | <val> [NOT] BETWEEN <val> AND <val>
  | <val> [NOT] IN (<val> [, <val> ...] | <select_list>)
  | <val> IS [NOT] NULL
  | <val> IS [NOT] DISTINCT FROM <val>
  | <val> [NOT] CONTAINING <val>
  | <val> [NOT] STARTING [WITH] <val>
  | <val> [NOT] LIKE <val> [ESCAPE <val>]
  | <val> [NOT] SIMILAR TO <val> [ESCAPE <val>]
  | <val> <operator> {ALL | SOME | ANY} (<select_list>)
  | [NOT] EXISTS (<select_expr>)
  | [NOT] SINGULAR (<select_expr>)
  | (<check_condition>)
  | NOT <check_condition>
  | <check_condition> OR <check_condition>
  | <check_condition> AND <check_condition>

<operator> ::=
    <> | != | ^= | ~= | = | < | > | <= | >=
  | !< | ^< | ~< | !> | ^> | ~>

<val> ::=
    colname ['['array_idx [, array_idx ...]']']
  | <literal>
  | <context_var>
  | <expression>
  | NULL
  | NEXT VALUE FOR genname
  | GEN_ID(genname, <val>)
  | CAST(<val> AS <cast_type>)
  | (<select_one>)
  | func([<val> [, <val> ...]])

<cast_type> ::= <domain_or_non_array_type> | <array_datatype>

<domain_or_non_array_type> ::=
  !! See Scalar Data Types Syntax !!

<table_attrs> ::= <table_attr> [<table_attr> ...]

<table_attr> ::=
    <sql_security>
  | {ENABLE | DISABLE} PUBLICATION

<sql_security> ::= SQL SECURITY {INVOKER | DEFINER}

<gtt_table_attrs> ::= <gtt_table_attr> [<gtt_table_attr> ...]

<gtt_table_attr> ::=
    <sql_security>
  | ON COMMIT {DELETE | PRESERVE} ROWS
| Parameter | Description |
|---|---|
| tablename | Name (identifier) for the table. The maximum length is 63 characters and must be unique in the database. |
| filespec | File specification (only for external tables). Full file name and path, enclosed in single quotes, correct for the local file system and located on a storage device that is physically connected to Firebird’s host computer. |
| colname | Name (identifier) for a column in the table. The maximum length is 63 characters and must be unique in the table. |
| tconstraint | Table constraint |
| table_attrs | Attributes of a normal table |
| gtt_table_attrs | Attributes of a global temporary table |
| datatype | SQL data type |
| domain_name | Domain name |
| start_value | The initial value of the identity column |
| inc_value | The increment (or step) value of the identity column; the default is 1 |
| col_constraint | Column constraint |
| constr_name | The name (identifier) of a constraint. The maximum length is 63 characters. |
| other_table | The name of the table referenced by the foreign key constraint |
| other_col | The name of the column in other_table that is referenced by the foreign key |
| literal | A literal value that is allowed in the given context |
| context_var | Any context variable whose data type is allowed in the given context |
| check_condition | The condition applied to a CHECK constraint, that will resolve as either true, false or unknown |
| collation | Collation |
| select_one | A scalar SELECT statement, selecting one column and returning only one row |
| select_list | A SELECT statement selecting one column and returning zero or more rows |
| select_expr | A SELECT statement selecting one or more columns and returning zero or more rows |
| expression | An expression resolving to a value that is allowed in the given context |
| genname | Sequence (generator) name |
| func | Internal function or UDF |
The CREATE TABLE
statement creates a new table.
Its name must be unique among the names of all tables, views, and stored procedures in the database.
A table must contain at least one column that is not computed, and the names of columns must be unique in the table.
A column must have either an explicit SQL data type, the name of a domain whose attributes will be copied for the column, or be defined as COMPUTED BY
an expression (a calculated field).
A table may have any number of table constraints, including none.
Character Columns
You can use the CHARACTER SET
clause to specify the character set for the CHAR
, VARCHAR
and BLOB
(text subtype) types.
If the character set is not specified, the default character set of the database — at time of the creation of the column — will be used.
If the database has no default character set, the NONE
character set is applied.
Data in any encoding can be added to such a column, but it is not possible to add this data to a column with a different encoding.
No transliteration is performed between the source and destination encodings, which may result in errors.
The optional COLLATE
clause allows you to specify the collation for character data types, including BLOB SUB_TYPE TEXT
.
If no collation is specified, the default collation for the specified character set — at time of the creation of the column — is applied.
Setting a DEFAULT
Value
The optional DEFAULT
clause allows you to specify the default value for the table column.
This value will be added to the column when an INSERT
statement is executed and that column was omitted from the INSERT
command or DEFAULT
was used instead of a value expression.
The default value will also be used in UPDATE
when DEFAULT
is used instead of a value expression.
The default value can be a literal of a compatible type, a context variable that is type-compatible with the data type of the column, or NULL
, if the column allows it.
If no default value is explicitly specified, NULL
is implied.
An expression cannot be used as a default value.
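A sketch of this behaviour (the table and column names are invented for illustration):

```sql
CREATE TABLE ORDERS (
  ID INTEGER NOT NULL PRIMARY KEY,
  STATUS VARCHAR(10) DEFAULT 'NEW',
  CREATED TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- STATUS omitted from the column list: the default 'NEW' is used
INSERT INTO ORDERS (ID) VALUES (1);

-- DEFAULT used instead of a value expression, in INSERT and UPDATE
INSERT INTO ORDERS (ID, STATUS) VALUES (2, DEFAULT);
UPDATE ORDERS SET STATUS = DEFAULT WHERE ID = 2;
```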
Domain-based Columns
To define a column, you can use a previously defined domain.
If the definition of a column is based on a domain, it may contain a new default value, additional CHECK
constraints, and a COLLATE
clause that will override the values specified in the domain definition.
The definition of such a column may contain additional column constraints (for instance, NOT NULL
), if the domain does not already have that constraint.
It is not possible to define a domain-based column that is nullable if the domain was defined with the NOT NULL constraint.
Identity Columns (Autoincrement)
Identity columns are defined using the GENERATED {ALWAYS | BY DEFAULT} AS IDENTITY
clause.
The identity column is a column associated with an internal sequence.
Its value is set automatically every time it is not specified in the INSERT
statement, or when the column value is specified as DEFAULT
.
-
The data type of an identity column must be an exact number type with zero scale. Allowed types are
SMALLINT
,INTEGER
,BIGINT
,NUMERIC(p[,0])
andDECIMAL(p[,0])
with p <= 18.
The
INT128
type and numeric types with a precision higher than 18 are not supported.
-
-
An identity column cannot have a
DEFAULT
orCOMPUTED
value. -
An identity column can be altered to become a regular column.
-
A regular column cannot be altered to become an identity column.
-
Identity columns are implicitly
NOT NULL
(non-nullable), and cannot be made nullable. -
Uniqueness is not enforced automatically. A
UNIQUE
orPRIMARY KEY
constraint is required to guarantee uniqueness. -
The use of other methods of generating key values for identity columns, e.g. by trigger-generator code or by allowing users to change or add them, is discouraged to avoid unexpected key violations.
-
The
INCREMENT
value cannot be zero (0
).
GENERATED ALWAYS
An identity column of type GENERATED ALWAYS
will always generate a column value on insert.
Explicitly inserting a value into a column of this type is not allowed, unless:
-
the specified value is
DEFAULT
; this generates the identity value as normal. -
the
OVERRIDING SYSTEM VALUE
clause is specified in the INSERT
statement; this allows a user value to be inserted; -
the
OVERRIDING USER VALUE
clause is specified in theINSERT
statement; this allows a user specified value to be ignored (though in general it makes more sense to not include the column in theINSERT
).
GENERATED BY DEFAULT
An identity column of type GENERATED BY DEFAULT
will generate a value on insert if no value — other than DEFAULT
— is specified on insert.
When the OVERRIDING USER VALUE
clause is specified in the INSERT
statement, the user-provided value is ignored, and an identity value is generated (as if the column was not included in the insert, or the value DEFAULT
was specified).
START WITH
Option
The optional START WITH
clause allows you to specify an initial value other than 1.
This value is the first value generated when using NEXT VALUE FOR sequence
.
INCREMENT
Option
The optional INCREMENT
clause allows you to specify a non-zero step value other than the default of 1.
The SQL standard specifies that if |
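Both options can be combined, as in this sketch (table and column names are invented):

```sql
CREATE TABLE SEATS (
  SEAT_NO INTEGER GENERATED BY DEFAULT AS IDENTITY
    (START WITH 100 INCREMENT BY 10) PRIMARY KEY,
  ROW_LABEL VARCHAR(5)
);

-- Generated SEAT_NO values begin at 100 and step by 10
INSERT INTO SEATS (ROW_LABEL) VALUES ('A');
INSERT INTO SEATS (ROW_LABEL) VALUES ('A');
```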
Computed Columns
Computed columns can be defined with the COMPUTED [BY]
or GENERATED ALWAYS AS
clause (the SQL standard alternative to COMPUTED [BY]
).
Specifying the data type is optional;
if not specified, the appropriate type will be derived from the expression.
If the data type is explicitly specified for a calculated field, the calculation result is converted to the specified type. This means, for instance, that the result of a numeric expression could be converted to a string.
In a query that selects a computed column, the expression is evaluated for each row of the selected data.
Instead of a computed column, in some cases it makes sense to use a regular column whose value is calculated in triggers for adding and updating data. It may reduce the performance of inserting/updating records, but it will increase the performance of data selection. |
Defining an Array Column
-
If the column is to be an array, the base type can be any SQL data type except
BLOB
and array. -
The dimensions of the array are specified between square brackets.
-
For each array dimension, one or two integer numbers define the lower and upper boundaries of its index range:
-
By default, arrays are 1-based. The lower boundary is implicit and only the upper boundary need be specified. A single number smaller than 1 defines the range num…1 and a number greater than 1 defines the range 1…num.
-
Two numbers separated by a colon (‘
:
’) and optional whitespace, the second greater than the first, can be used to define the range explicitly. One or both boundaries can be less than zero, as long as the upper boundary is greater than the lower.
-
-
When the array has multiple dimensions, the range definitions for each dimension must be separated by commas and optional whitespace.
-
Subscripts are validated only if an array actually exists. It means that no error messages regarding invalid subscripts will be returned if selecting a specific element returns nothing or if an array field is
NULL
.
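The dimension rules above can be sketched as follows (table and column names are invented):

```sql
CREATE TABLE SAMPLE_ARRAYS (
  A INTEGER [5],         -- one dimension, implicit lower bound: indexes 1..5
  B INTEGER [0:4],       -- one dimension, explicit range: indexes 0..4
  C INTEGER [-2:2, 1:3]  -- two dimensions, ranges separated by a comma
);
```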
Constraints
Five types of constraints can be specified. They are:
-
Primary key (
PRIMARY KEY
) -
Unique key (
UNIQUE
) -
Foreign key (
REFERENCES
) -
CHECK
constraint (CHECK
) -
NOT NULL
constraint (NOT NULL
)
Constraints can be specified at column level (“column constraints”) or at table level (“table constraints”).
Table-level constraints are required when keys (unique constraint, primary key, foreign key) consist of multiple columns and when a CHECK
constraint involves other columns in the row besides the column being defined.
The NOT NULL
constraint can only be specified as a column constraint.
Syntax for some types of constraint may differ slightly according to whether the constraint is defined at the column or table level.
-
A column-level constraint is specified during a column definition, after all column attributes except
COLLATION
are specified, and can involve only the column specified in that definition -
Table-level constraints can only be specified after the definitions of the columns used in the constraint.
-
Table-level constraints are a more flexible way to set constraints, since they can cater for constraints involving multiple columns
-
You can mix column-level and table-level constraints in the same
CREATE TABLE
statement
The system automatically creates the corresponding index for a primary key (PRIMARY KEY
), a unique key (UNIQUE
), and a foreign key (REFERENCES
for a column-level constraint, FOREIGN KEY REFERENCES
for table-level).
Names for Constraints and Their Indexes
Constraints and their indexes are named automatically if no name was specified using the CONSTRAINT
clause:
-
The constraint name has the form
INTEG_n
, where n represents one or more digits -
The index name has the form
RDB$PRIMARYn
(for a primary key index),RDB$FOREIGNn
(for a foreign key index) orRDB$n
(for a unique key index).
PRIMARY KEY
The PRIMARY KEY
constraint is built on one or more key columns, where each column has the NOT NULL
constraint specified.
The values across the key columns in any row must be unique.
A table can have only one primary key.
-
A single-column primary key can be defined as a column-level or a table-level constraint
-
A multi-column primary key must be specified as a table-level constraint
The UNIQUE
Constraint
The UNIQUE
constraint defines the requirement of content uniqueness for the values in a key throughout the table.
A table can contain any number of unique key constraints.
As with the primary key, the unique constraint can be multi-column. If so, it must be specified as a table-level constraint.
NULL
in Unique Keys
Firebird’s SQL-compliant rules for UNIQUE
constraints allow one or more NULL
s in a column with a UNIQUE
constraint.
This makes it possible to define a UNIQUE
constraint on a column that does not have the NOT NULL
constraint.
For UNIQUE
keys that span multiple columns, the logic is a little complicated:
-
Multiple rows having null in all the columns of the key are allowed
-
Multiple rows having keys with different combinations of nulls and non-null values are allowed
-
Multiple rows having the same key columns null and the rest filled with non-null values are allowed, provided the non-null values differ in at least one column
-
Multiple rows having the same key columns null and the rest filled with non-null values that are the same in every column will violate the constraint
The rules for uniqueness can be summarised thus:
In principle, all nulls are considered distinct.
However, if two rows have exactly the same key columns filled with non-null values, the NULL
columns are ignored and the uniqueness is determined on the non-null columns as though they constituted the entire key.
RECREATE TABLE t( x int, y int, z int, unique(x,y,z));
INSERT INTO t values( NULL, 1, 1 );
INSERT INTO t values( NULL, NULL, 1 );
INSERT INTO t values( NULL, NULL, NULL );
INSERT INTO t values( NULL, NULL, NULL ); -- Permitted
INSERT INTO t values( NULL, NULL, 1 ); -- Not permitted
FOREIGN KEY
A foreign key ensures that the participating column(s) can contain only values that also exist in the referenced column(s) in the master table.
These referenced columns are often called target columns.
They must be the primary key or a unique key in the target table.
They need not have a NOT NULL
constraint defined on them although, if they are the primary key, they will, of course, have that constraint.
The foreign key columns in the referencing table itself do not require a NOT NULL
constraint.
A single-column foreign key can be defined in the column declaration, using the keyword REFERENCES
:
... ,
ARTIFACT_ID INTEGER REFERENCES COLLECTION (ARTIFACT_ID),
The column ARTIFACT_ID
in the example references a column of the same name in the table COLLECTION
.
Both single-column and multi-column foreign keys can be defined at the table level. For a multi-column foreign key, the table-level declaration is the only option.
...
CONSTRAINT FK_ARTSOURCE FOREIGN KEY(DEALER_ID, COUNTRY)
REFERENCES DEALER (DEALER_ID, COUNTRY),
Notice that the column names in the referenced (“master”) table may differ from those in the foreign key.
If no target columns are specified, the foreign key automatically references the target table’s primary key.
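For instance, in this sketch (table and column names invented), the foreign key implicitly references the primary key of the master table:

```sql
CREATE TABLE INVOICES (
  ID INTEGER NOT NULL PRIMARY KEY,
  -- No target column given: references the primary key of DEALER
  DEALER_ID INTEGER REFERENCES DEALER
);
```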
With the sub-clauses ON UPDATE
and ON DELETE
it is possible to specify an action to be taken on the affected foreign key column(s) when referenced values in the master table are changed:
NO ACTION
-
(the default) — Nothing is done
CASCADE
-
The change in the master table is propagated to the corresponding row(s) in the child table. If a key value changes, the corresponding key in the child records changes to the new value; if the master row is deleted, the child records are deleted.
SET DEFAULT
-
The foreign key columns in the affected rows will be set to their default values as they were when the foreign key constraint was defined.
SET NULL
-
The foreign key columns in the affected rows will be set to
NULL
.
The specified action, or the default NO ACTION
, could cause a foreign key column to become invalid.
For example, it could get a value that is not present in the master table.
Such a condition will cause the operation on the master table to fail with an error message.
...
CONSTRAINT FK_ORDERS_CUST
FOREIGN KEY (CUSTOMER) REFERENCES CUSTOMERS (ID)
ON UPDATE CASCADE ON DELETE SET NULL
CHECK
Constraint
The CHECK
constraint defines the condition the values inserted in this column or row must satisfy.
A condition is a logical expression (also called a predicate) that can return the TRUE
, FALSE
and UNKNOWN
values.
A condition is considered satisfied if the predicate returns TRUE
or value UNKNOWN
(equivalent to NULL
).
If the predicate returns FALSE
, the value will not be accepted.
This condition is used for inserting a new row into the table (the INSERT
statement) and for updating the existing value of the table column (the UPDATE
statement) and also for statements where one of these actions may take place (UPDATE OR INSERT
, MERGE
).
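The acceptance rule for UNKNOWN can be sketched like this (table and column names are invented):

```sql
CREATE TABLE MEASUREMENTS (
  VAL INTEGER CHECK (VAL > 0)
);

-- Accepted: the predicate evaluates to UNKNOWN, not FALSE
INSERT INTO MEASUREMENTS (VAL) VALUES (NULL);

-- Rejected: the predicate evaluates to FALSE
-- INSERT INTO MEASUREMENTS (VAL) VALUES (-1);
```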
CHECK
constraints — whether defined at table level or column level — refer to table columns by their names.
The use of the keyword VALUE
as a placeholder — as in domain CHECK
constraints — is not valid in the context of defining constraints in a table.
An example of a table definition with two column-level constraints and one at table-level:
CREATE TABLE PLACES (
...
LAT DECIMAL(9, 6) CHECK (ABS(LAT) <= 90),
LON DECIMAL(9, 6) CHECK (ABS(LON) <= 180),
...
CONSTRAINT CHK_POLES CHECK (ABS(LAT) < 90 OR LON = 0)
);
SQL SECURITY
Clause
The SQL SECURITY
clause specifies the security context for executing functions referenced in computed columns and check constraints, and the default context used for triggers fired for this table.
When SQL Security is not specified, the default value of the database is applied at runtime.
See also SQL Security in chapter Security.
Replication Management
When the database has been configured using ALTER DATABASE INCLUDE ALL TO PUBLICATION
, new tables will automatically be added for publication, unless overridden using the DISABLE PUBLICATION
clause.
If the database has not been configured for INCLUDE ALL
(or has later been reconfigured using ALTER DATABASE EXCLUDE ALL FROM PUBLICATION
), new tables will not automatically be added for publication.
To include tables for publication, the ENABLE PUBLICATION
clause must be used.
Who Can Create a Table
The CREATE TABLE
statement can be executed by:
-
Users with the
CREATE TABLE
privilege
The user executing the CREATE TABLE
statement becomes the owner of the table.
CREATE TABLE
Examples
-
Creating the
COUNTRY
table with the primary key specified as a column constraint.
CREATE TABLE COUNTRY (
  COUNTRY COUNTRYNAME NOT NULL PRIMARY KEY,
  CURRENCY VARCHAR(10) NOT NULL
);
-
Creating the
STOCK
table with the named primary key specified at the column level and the named unique key specified at the table level.
CREATE TABLE STOCK (
  MODEL SMALLINT NOT NULL CONSTRAINT PK_STOCK PRIMARY KEY,
  MODELNAME CHAR(10) NOT NULL,
  ITEMID INTEGER NOT NULL,
  CONSTRAINT MOD_UNIQUE UNIQUE (MODELNAME, ITEMID)
);
-
Creating the
JOB
table with a primary key constraint spanning two columns, a foreign key constraint for theCOUNTRY
table and a table-levelCHECK
constraint. The table also contains an array of 5 elements.
CREATE TABLE JOB (
  JOB_CODE JOBCODE NOT NULL,
  JOB_GRADE JOBGRADE NOT NULL,
  JOB_COUNTRY COUNTRYNAME,
  JOB_TITLE VARCHAR(25) NOT NULL,
  MIN_SALARY NUMERIC(18, 2) DEFAULT 0 NOT NULL,
  MAX_SALARY NUMERIC(18, 2) NOT NULL,
  JOB_REQUIREMENT BLOB SUB_TYPE 1,
  LANGUAGE_REQ VARCHAR(15) [1:5],
  PRIMARY KEY (JOB_CODE, JOB_GRADE),
  FOREIGN KEY (JOB_COUNTRY) REFERENCES COUNTRY (COUNTRY)
    ON UPDATE CASCADE
    ON DELETE SET NULL,
  CONSTRAINT CHK_SALARY CHECK (MIN_SALARY < MAX_SALARY)
);
-
Creating the
PROJECT
table with primary, foreign and unique key constraints with custom index names specified with theUSING
clause.
CREATE TABLE PROJECT (
  PROJ_ID PROJNO NOT NULL,
  PROJ_NAME VARCHAR(20) NOT NULL
    UNIQUE USING DESC INDEX IDX_PROJNAME,
  PROJ_DESC BLOB SUB_TYPE 1,
  TEAM_LEADER EMPNO,
  PRODUCT PRODTYPE,
  CONSTRAINT PK_PROJECT PRIMARY KEY (PROJ_ID)
    USING INDEX IDX_PROJ_ID,
  FOREIGN KEY (TEAM_LEADER) REFERENCES EMPLOYEE (EMP_NO)
    USING INDEX IDX_LEADER
);
-
Creating a table with an identity column
create table objects (
  id integer generated by default as identity primary key,
  name varchar(15)
);

insert into objects (name) values ('Table');
insert into objects (id, name) values (10, 'Computer');
insert into objects (name) values ('Book');

select * from objects order by id;

          ID NAME
============ ===============
           1 Table
           2 Book
          10 Computer
-
Creating the
SALARY_HISTORY
table with two computed fields. The first one is declared according to the SQL standard, while the second one is declared according to the traditional declaration of computed fields in Firebird.
CREATE TABLE SALARY_HISTORY (
  EMP_NO EMPNO NOT NULL,
  CHANGE_DATE TIMESTAMP DEFAULT 'NOW' NOT NULL,
  UPDATER_ID VARCHAR(20) NOT NULL,
  OLD_SALARY SALARY NOT NULL,
  PERCENT_CHANGE DOUBLE PRECISION DEFAULT 0 NOT NULL,
  SALARY_CHANGE GENERATED ALWAYS AS
    (OLD_SALARY * PERCENT_CHANGE / 100),
  NEW_SALARY COMPUTED BY
    (OLD_SALARY + OLD_SALARY * PERCENT_CHANGE / 100)
);
-
With
DEFINER
set for tablet
, userUS
needs only theSELECT
privilege ont
. If it were set forINVOKER
, the user would also need theEXECUTE
privilege on functionf
.
set term ^;
create function f() returns int
as
begin
  return 3;
end^
set term ;^

create table t (i integer, c computed by (i + f()))
  SQL SECURITY DEFINER;
insert into t values (2);
grant select on table t to user us;
commit;

connect 'localhost:/tmp/7.fdb' user us password 'pas';
select * from t;
-
With
DEFINER
set for tabletr
, userUS
needs only theINSERT
privilege ontr
. If it were set forINVOKER
, either the user or the trigger would also need theINSERT
privilege on tablet
. The result would be the same ifSQL SECURITY DEFINER
were specified for triggertr_ins
:
create table tr (i integer) SQL SECURITY DEFINER;
create table t (i integer);

set term ^;
create trigger tr_ins for tr after insert
as
begin
  insert into t values (NEW.i);
end^
set term ;^

grant insert on table tr to user us;
commit;

connect 'localhost:/tmp/29.fdb' user us password 'pas';
insert into tr values(2);
Global Temporary Tables (GTT)
Global temporary tables have persistent metadata, but their contents are transaction-bound (the default) or connection-bound.
Every transaction or connection has its own private instance of a GTT, isolated from all the others.
Instances are only created if and when the GTT is referenced.
They are destroyed when the transaction ends or on disconnect.
The metadata of a GTT can be modified or removed using ALTER TABLE
and DROP TABLE
, respectively.
CREATE GLOBAL TEMPORARY TABLE tablename
  (<column_def> [, {<column_def> | <table_constraint>} ...])
  [<gtt_table_attrs>]

<gtt_table_attrs> ::= <gtt_table_attr> [<gtt_table_attr> ...]

<gtt_table_attr> ::=
    <sql_security>
  | ON COMMIT {DELETE | PRESERVE} ROWS
-
ON COMMIT DELETE ROWS
creates a transaction-level GTT (the default),ON COMMIT PRESERVE ROWS
a connection-level GTT -
The
EXTERNAL [FILE]
clause is not allowed in the definition of a global temporary table
GTTs are writable in read-only transactions. The effect is as follows:
- Read-only transaction in read-write database
-
Writable in both ON COMMIT PRESERVE ROWS and ON COMMIT DELETE ROWS
- Read-only transaction in read-only database
-
Writable in ON COMMIT DELETE ROWS only
Restrictions on GTTs
GTTs can be “dressed up” with all the features of ordinary tables (keys, references, indexes, triggers and so on), but there are a few restrictions:
-
GTTs and regular tables cannot reference one another
-
A connection-bound (“PRESERVE ROWS”) GTT cannot reference a transaction-bound (“DELETE ROWS”) GTT
-
Domain constraints cannot reference any GTT
-
The destruction of a GTT instance at the end of its lifecycle does not cause any BEFORE/AFTER delete triggers to fire
In an existing database, it is not always easy to distinguish a regular table from a GTT, or a transaction-level GTT from a connection-level GTT. Use a query against the system tables to find out what type of table you are looking at, or to get an overview of the relation types of all the tables in the database.
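One possible way to do this is a sketch against the system tables RDB$RELATIONS and RDB$TYPES; the table name MYTABLE is a placeholder:

```sql
-- Relation type of a single table; 'MYTABLE' is a placeholder name
SELECT t.RDB$TYPE_NAME
FROM RDB$RELATIONS r
JOIN RDB$TYPES t
  ON r.RDB$RELATION_TYPE = t.RDB$TYPE
WHERE t.RDB$FIELD_NAME = 'RDB$RELATION_TYPE'
  AND r.RDB$RELATION_NAME = 'MYTABLE';

-- Overview of the types of all user relations in the database
SELECT r.RDB$RELATION_NAME, t.RDB$TYPE_NAME
FROM RDB$RELATIONS r
JOIN RDB$TYPES t
  ON r.RDB$RELATION_TYPE = t.RDB$TYPE
WHERE t.RDB$FIELD_NAME = 'RDB$RELATION_TYPE'
  AND COALESCE(r.RDB$SYSTEM_FLAG, 0) = 0;
```

With this approach, a regular table reports PERSISTENT, while GTTs report GLOBAL_TEMPORARY_PRESERVE or GLOBAL_TEMPORARY_DELETE.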
Examples of Global Temporary Tables
-
Creating a connection-scoped global temporary table.

CREATE GLOBAL TEMPORARY TABLE MYCONNGTT (
  ID INTEGER NOT NULL PRIMARY KEY,
  TXT VARCHAR(32),
  TS TIMESTAMP DEFAULT CURRENT_TIMESTAMP)
ON COMMIT PRESERVE ROWS;
-
Creating a transaction-scoped global temporary table that uses a foreign key to reference a connection-scoped global temporary table. The ON COMMIT sub-clause is optional because DELETE ROWS is the default.

CREATE GLOBAL TEMPORARY TABLE MYTXGTT (
  ID INTEGER NOT NULL PRIMARY KEY,
  PARENT_ID INTEGER NOT NULL REFERENCES MYCONNGTT(ID),
  TXT VARCHAR(32),
  TS TIMESTAMP DEFAULT CURRENT_TIMESTAMP
) ON COMMIT DELETE ROWS;
External Tables
The optional EXTERNAL [FILE] clause specifies that the table is stored outside the database in an external text file of fixed-length records. The columns of a table stored in an external file can be of any type except BLOB or ARRAY, although for most purposes, only columns of CHAR types would be useful.

All you can do with a table stored in an external file is insert new rows (INSERT) and query the data (SELECT). Updating existing data (UPDATE) and deleting rows (DELETE) are not possible.

A file that is defined as an external table must be located on a storage device that is physically present on the machine where the Firebird server runs and, if the parameter ExternalFileAccess in the firebird.conf configuration file is Restrict, it must be in one of the directories listed there as the argument for Restrict. If the file does not exist yet, Firebird will create it on first access.

The ability to use external files for a table depends on the value set for the ExternalFileAccess parameter in firebird.conf.
External File Format
The “row” format of the external table is fixed length and binary. There are no field delimiters: both field and row boundaries are determined by maximum sizes, in bytes, of the field definitions. Keep this in mind, both when defining the structure of the external table and when designing an input file for an external table that is to import (or export) data from another application. The ubiquitous CSV format, for example, is of no use as an input file and cannot be generated directly into an external file.
The most useful data type for the columns of external tables is the fixed-length CHAR
type, of suitable lengths for the data they are to carry.
Date and number types are easily cast to and from strings whereas the native data types — binary data — will appear to external applications as unparseable “alphabetti”.
Of course, there are ways to manipulate typed data to generate output files from Firebird that can be read directly as input files to other applications, using stored procedures, with or without employing external tables. Such techniques are beyond the scope of a language reference. Here, we provide guidelines and tips for producing and working with simple text files, since the external table feature is often used as an easy way to produce or read transaction-independent logs that can be studied off-line in a text editor or auditing application.
Generally, external files are more useful if rows are separated by a delimiter, in the form of a “newline” sequence that is recognised by reader applications on the intended platform.
For most contexts on Windows, it is the two-byte 'CRLF' sequence, carriage return (ASCII code decimal 13) and line feed (ASCII code decimal 10).
On POSIX, LF on its own is usual.
There are various ways to populate this delimiter column.
In our example below, it is done by using a BEFORE INSERT
trigger and the internal function ASCII_CHAR
.
For our example, we will define an external log table that might be used by an exception handler in a stored procedure or trigger. The external table is chosen because the messages from any handled exceptions will be retained in the log, even if the transaction that launched the process is eventually rolled back because of another, unhandled exception. For demonstration purposes, it has two data columns, a timestamp and a message. The third column stores the row delimiter:
CREATE TABLE ext_log
EXTERNAL FILE 'd:\externals\log_me.txt' (
stamp CHAR (24),
message CHAR(100),
crlf CHAR(2) -- for a Windows context
);
COMMIT;
Now, a trigger, to write the timestamp and the row delimiter each time a message is written to the file:
SET TERM ^;
CREATE TRIGGER bi_ext_log FOR ext_log
ACTIVE BEFORE INSERT
AS
BEGIN
IF (new.stamp is NULL) then
new.stamp = CAST (CURRENT_TIMESTAMP as CHAR(24));
new.crlf = ASCII_CHAR(13) || ASCII_CHAR(10);
END ^
COMMIT ^
SET TERM ;^
Inserting some records (which could have been done by an exception handler or a fan of Shakespeare):
insert into ext_log (message)
values('Shall I compare thee to a summer''s day?');
insert into ext_log (message)
values('Thou art more lovely and more temperate');
The output:
2015-10-07 15:19:03.4110Shall I compare thee to a summer's day?
2015-10-07 15:19:58.7600Thou art more lovely and more temperate
5.4.2. ALTER TABLE
Alters a table
DSQL, ESQL
ALTER TABLE tablename
<operation> [, <operation> ...]
<operation> ::=
ADD <col_def>
| ADD <tconstraint>
| DROP colname
| DROP CONSTRAINT constr_name
| ALTER [COLUMN] colname <col_mod>
| ALTER SQL SECURITY {INVOKER | DEFINER}
| DROP SQL SECURITY
| {ENABLE | DISABLE} PUBLICATION
<col_mod> ::=
TO newname
| POSITION newpos
| <regular_col_mod>
| <computed_col_mod>
| <identity_col_mod>
<regular_col_mod> ::=
TYPE {<datatype> | domainname}
| SET DEFAULT {<literal> | NULL | <context_var>}
| DROP DEFAULT
| {SET | DROP} NOT NULL
<computed_col_mod> ::=
[TYPE <datatype>] {COMPUTED [BY] | GENERATED ALWAYS AS} (<expression>)
<identity_col_mod> ::=
SET GENERATED {ALWAYS | BY DEFAULT} [<identity_mod_option>...]
| <identity_mod_options>...
| DROP IDENTITY
<identity_mod_options> ::=
RESTART [WITH restart_value]
| SET INCREMENT [BY] inc_value
!! See CREATE TABLE syntax for further rules !!
| Parameter | Description |
|---|---|
| tablename | Name (identifier) of the table |
| operation | One of the available operations altering the structure of the table |
| colname | Name (identifier) for a column in the table. The maximum length is 63 characters. Must be unique in the table. |
| domain_name | Domain name |
| newname | New name (identifier) for the column. The maximum length is 63 characters. Must be unique in the table. |
| newpos | The new column position (an integer between 1 and the number of columns in the table) |
| other_table | The name of the table referenced by the foreign key constraint |
| literal | A literal value that is allowed in the given context |
| context_var | A context variable whose type is allowed in the given context |
| check_condition | The condition of a CHECK constraint |
| restart_value | The first value of the identity column after restart |
| inc_value | The increment (or step) value of the identity column |
The ALTER TABLE statement changes the structure of an existing table. With one ALTER TABLE statement it is possible to perform multiple operations, adding/dropping columns and constraints and also altering column specifications. Multiple operations in an ALTER TABLE statement are separated with commas.
Version Count Increments
Some changes in the structure of a table increment the metadata change counter (“version count”) assigned to every table. The number of metadata changes is limited to 255 for each table, or 32,000 for each view. Once the counter reaches this limit, you will not be able to make any further changes to the structure of the table or view without resetting the counter.
The ADD Clause

With the ADD clause you can add a new column or a new table constraint. The syntax for defining the column and the syntax for defining the table constraint correspond with those described for the CREATE TABLE statement.
-
Each time a new column is added, the metadata change counter is increased by one
-
Adding a new table constraint does not increase the metadata change counter
The DROP Clause

The DROP colname clause deletes the specified column from the table.
An attempt to drop a column will fail if anything references it.
Consider the following items as sources of potential dependencies:
-
column or table constraints
-
indexes
-
stored procedures, functions and triggers
-
views
-
Each time a column is dropped, the table’s metadata change counter is increased by one.
The DROP CONSTRAINT Clause

The DROP CONSTRAINT clause deletes the specified column-level or table-level constraint.

A PRIMARY KEY or UNIQUE key constraint cannot be deleted if it is referenced by a FOREIGN KEY constraint in another table. It will be necessary to drop that FOREIGN KEY constraint before attempting to drop the PRIMARY KEY or UNIQUE key constraint it references.
-
Deleting a column constraint or a table constraint does not increase the metadata change counter.
The ALTER [COLUMN] Clause

With the ALTER [COLUMN] clause, attributes of existing columns can be modified without the need to drop and re-add the column.
Permitted modifications are:
-
change the name (does not affect the metadata change counter)
-
change the data type (increases the metadata change counter by one)
-
change the column position in the column list of the table (does not affect the metadata change counter)
-
delete the default column value (does not affect the metadata change counter)
-
set a default column value or change the existing default (does not affect the metadata change counter)
-
change the type and expression for a computed column (does not affect the metadata change counter)
-
set the NOT NULL constraint (does not affect the metadata change counter)
-
drop the NOT NULL constraint (does not affect the metadata change counter)
-
change the type of an identity column, or change an identity column to a regular column
-
restart an identity column
-
change the increment of an identity column
Renaming a Column: the TO Clause

The TO keyword with a new identifier renames an existing column. The table must not have an existing column that has the same identifier.

It will not be possible to change the name of a column that is included in any constraint: primary key, unique key, foreign key, or CHECK constraints of the table. Renaming a column will also be disallowed if the column is used in any stored PSQL module or view.
Changing the Data Type of a Column: the TYPE Clause

The keyword TYPE changes the data type of an existing column to another, allowable type. A type change that might result in data loss will be disallowed. As an example, the number of characters in the new type for a CHAR or VARCHAR column cannot be smaller than the existing specification for it.

If the column was declared as an array, no change to its type or its number of dimensions is permitted.

The data type of a column that is involved in a foreign key, primary key or unique constraint cannot be changed at all.
Changing the Position of a Column: the POSITION Clause

The POSITION keyword changes the position of an existing column in the notional “left-to-right” layout of the record. Numbering of column positions starts at 1.
-
If a position less than 1 is specified, an error message will be returned
-
If a position number is greater than the number of columns in the table, its new position will be adjusted silently to match the number of columns.
The DROP DEFAULT and SET DEFAULT Clauses

The optional DROP DEFAULT clause deletes the current default value for the column.

-
If the column is based on a domain with a default value, the default value will revert to the domain default
-
An error will be raised if an attempt is made to delete the default value of a column which has no default value or whose default value is domain-based

The optional SET DEFAULT clause sets a default value for the column. If the column already has a default value, it will be replaced with the new one. The default value applied to a column always overrides one inherited from a domain.
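A brief sketch of both clauses, with illustrative table and column names:

```sql
-- Illustrative names; set or replace a default, then remove it again
ALTER TABLE STOCK ALTER COLUMN MODEL SET DEFAULT 1;
ALTER TABLE STOCK ALTER COLUMN MODEL DROP DEFAULT;
```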
The SET NOT NULL and DROP NOT NULL Clauses

The SET NOT NULL clause adds a NOT NULL constraint on an existing table column. Contrary to its definition in CREATE TABLE, it is not possible to specify a constraint name.

The successful addition of the NOT NULL constraint is subject to a full data validation on the table, so ensure that the column has no nulls before attempting the change.

An explicit NOT NULL constraint on a domain-based column overrides domain settings. In this scenario, changing the domain to be nullable does not extend to a table column.

Dropping the NOT NULL constraint from the column if its type is a domain that also has a NOT NULL constraint has no observable effect until the NOT NULL constraint is dropped from the domain as well.
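As an illustration, assuming a hypothetical EMPLOYEES table with a nullable PHONE column, the nulls must be eliminated before the constraint can be added:

```sql
-- Hypothetical table and column names.
-- Replace NULLs first, otherwise the full data validation will fail:
UPDATE EMPLOYEES SET PHONE = 'n/a' WHERE PHONE IS NULL;
COMMIT;

ALTER TABLE EMPLOYEES ALTER COLUMN PHONE SET NOT NULL;
```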
The COMPUTED [BY] or GENERATED ALWAYS AS Clauses

The data type and expression underlying a computed column can be modified using a COMPUTED [BY] or GENERATED ALWAYS AS clause in the ALTER TABLE ALTER [COLUMN] statement. Conversion of a regular column to a computed one and vice versa is not permitted.
Changing Identity Columns
For identity columns (SET GENERATED {ALWAYS | BY DEFAULT}) it is possible to modify several properties using the following clauses.
Identity Type
The SET GENERATED {ALWAYS | BY DEFAULT} clause changes an identity column from ALWAYS to BY DEFAULT and vice versa. It is not possible to use this to change a regular column to an identity column.
RESTART
The RESTART clause restarts the sequence used for generating identity values. If only the RESTART clause is specified, the sequence resets to the initial value specified when the identity column was defined. If the optional WITH restart_value clause is specified, the sequence will restart with the specified value.

In Firebird 3.0, it was not possible to change the configured start value.
Changing SQL Security
Using the ALTER SQL SECURITY or DROP SQL SECURITY clauses, it is possible to change or drop the SQL Security property of a table. After dropping SQL Security, the default value of the database is applied at runtime.

If the SQL Security property is changed for a table, triggers that do not have an explicit SQL Security property will not see the effect of the change until the next time the trigger is loaded into the metadata cache.
Replication Management
To stop replicating a table, use the DISABLE PUBLICATION clause. To start replicating a table, use the ENABLE PUBLICATION clause.
The change in publication status takes effect at commit.
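For example (SALES is an illustrative table name), publication can be paused around a bulk load:

```sql
ALTER TABLE SALES DISABLE PUBLICATION;
-- ... changes made while publication is disabled are not replicated ...
ALTER TABLE SALES ENABLE PUBLICATION;
COMMIT;
```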
Attributes that Cannot Be Altered
The following alterations are not supported:
-
Changing the collation of a character type column
Who Can Alter a Table?
The ALTER TABLE statement can be executed by:
-
The owner of the table
-
Users with the ALTER ANY TABLE privilege
Examples Using ALTER TABLE
-
Adding the CAPITAL column to the COUNTRY table.

ALTER TABLE COUNTRY ADD CAPITAL VARCHAR(25);
-
Adding the CAPITAL column with the NOT NULL and UNIQUE constraints and deleting the CURRENCY column.

ALTER TABLE COUNTRY
  ADD CAPITAL VARCHAR(25) NOT NULL UNIQUE,
  DROP CURRENCY;
-
Adding the CHK_SALARY check constraint and a foreign key to the JOB table.

ALTER TABLE JOB
  ADD CONSTRAINT CHK_SALARY CHECK (MIN_SALARY < MAX_SALARY),
  ADD FOREIGN KEY (JOB_COUNTRY) REFERENCES COUNTRY (COUNTRY);
-
Setting a default value for the MODEL field, changing the type of the ITEMID column and renaming the MODELNAME column.

ALTER TABLE STOCK
  ALTER COLUMN MODEL SET DEFAULT 1,
  ALTER COLUMN ITEMID TYPE BIGINT,
  ALTER COLUMN MODELNAME TO NAME;
-
Restarting the sequence of an identity column.

ALTER TABLE objects ALTER ID RESTART WITH 100;
-
Changing the computed columns NEW_SALARY and SALARY_CHANGE.

ALTER TABLE SALARY_HISTORY
  ALTER NEW_SALARY GENERATED ALWAYS AS
    (OLD_SALARY + OLD_SALARY * PERCENT_CHANGE / 100),
  ALTER SALARY_CHANGE COMPUTED BY
    (OLD_SALARY * PERCENT_CHANGE / 100);
5.4.3. DROP TABLE
Drops a table
DSQL, ESQL
DROP TABLE tablename
| Parameter | Description |
|---|---|
| tablename | Name (identifier) of the table |
The DROP TABLE statement drops (deletes) an existing table. If the table has dependencies, the DROP TABLE statement will fail with an error.

When a table is dropped, all its triggers and indexes will be deleted as well.
Who Can Drop a Table?
The DROP TABLE statement can be executed by:
-
The owner of the table
-
Users with the DROP ANY TABLE privilege
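For example, dropping the COUNTRY table:

```sql
DROP TABLE COUNTRY;
```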
5.4.4. RECREATE TABLE
Drops a table if it exists, and creates a table
DSQL
RECREATE [GLOBAL TEMPORARY] TABLE tablename
  [EXTERNAL [FILE] 'filespec']
  (<col_def> [, {<col_def> | <tconstraint>} ...])
  [{<table_attrs> | <gtt_table_attrs>}]
See the CREATE TABLE section for the full syntax of CREATE TABLE and descriptions of defining tables, columns and constraints.

RECREATE TABLE creates or recreates a table. If a table with this name already exists, the RECREATE TABLE statement will try to drop it and create a new one. Existing dependencies will prevent the statement from executing.
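For example, creating or recreating a COUNTRY table (the column definitions are illustrative):

```sql
RECREATE TABLE COUNTRY (
  COUNTRY VARCHAR(15) NOT NULL PRIMARY KEY,
  CURRENCY VARCHAR(10) NOT NULL
);
```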
5.5. INDEX
An index is a database object used for faster data retrieval from a table or for speeding up the sorting in a query.
Indexes are also used to enforce the referential integrity constraints PRIMARY KEY, FOREIGN KEY and UNIQUE.
This section describes how to create indexes, activate and deactivate them, drop them and collect statistics (recalculate selectivity) for them.
5.5.1. CREATE INDEX
Creates an index
DSQL, ESQL
CREATE [UNIQUE] [ASC[ENDING] | DESC[ENDING]] INDEX indexname
  ON tablename
  {(col [, col ...]) | COMPUTED BY (<expression>)}
  [WHERE <search_condition>]
| Parameter | Description |
|---|---|
| indexname | Index name. The maximum length is 63 characters |
| tablename | The name of the table for which the index is to be built |
| col | Name of a column in the table. Columns of the types BLOB and ARRAY cannot be used |
| expression | The expression that will compute the values for a computed index, also known as an “expression index” |
| search_condition | Conditional expression of a partial index, to filter the rows to include in the index |
The CREATE INDEX statement creates an index for a table that can be used to speed up searching, sorting and grouping. Indexes are created automatically in the process of defining constraints, such as primary key, foreign key or unique constraints.

An index can be built on the content of columns of any data type except for BLOB and arrays. The name (identifier) of an index must be unique among all index names.
Who Can Create an Index?
The CREATE INDEX statement can be executed by:
-
The owner of the table
-
Users with the ALTER ANY TABLE privilege
Unique Indexes
Specifying the keyword UNIQUE in the index creation statement creates an index in which uniqueness will be enforced throughout the table. The index is referred to as a “unique index”. A unique index is not a constraint.

Unique indexes cannot contain duplicate key values (or duplicate key value combinations, in the case of compound, or multi-column, or multi-segment, indexes). Duplicated NULLs are permitted, in accordance with the SQL standard, in both single-segment and multi-segment indexes.
Partial Indexes
Specifying the WHERE clause in the index creation statement creates a partial index (also known as a filtered index). A partial index contains only rows that match the search condition of the WHERE clause.

A partial index definition may include the UNIQUE clause. In this case, every key in the index is required to be unique. This allows enforcing uniqueness for a subset of table rows.
A partial index is usable only in the following cases:
-
The WHERE clause of the statement includes exactly the same boolean expression as the one defined for the index;
-
The search condition defined for the index contains ORed boolean expressions and one of them is explicitly included in the WHERE clause of the statement;
-
The search condition defined for the index specifies IS NOT NULL and the WHERE clause of the statement includes an expression on the same field that is known to exclude NULLs.
Index Direction
All indexes in Firebird are uni-directional.
An index may be constructed from the lowest value to the highest (ascending order) or from the highest value to the lowest (descending order).
The keywords ASC[ENDING] and DESC[ENDING] are used to specify the direction of the index. The default index order is ASC[ENDING]. It is valid to define both an ascending and a descending index on the same column or key set.

A descending index can be useful on a column that will be subjected to searches on the high values (“newest”, maximum, etc.)
Computed (Expression) Indexes
In creating an index, you can use the COMPUTED BY clause to specify an expression instead of one or more columns. Computed indexes are used in queries where the condition in a WHERE, ORDER BY or GROUP BY clause exactly matches the expression in the index definition. The expression in a computed index may involve several columns in the table.

Expression indexes can also be used as a workaround for indexing computed columns: use the name of the computed column as the expression.
Limits on Indexes
Certain limits apply to indexes.
The maximum length of a key in an index is limited to a quarter of the page size.
Maximum Indexes per Table
The number of indexes that can be accommodated for each table is limited. The actual maximum for a specific table depends on the page size and the number of columns in the indexes.
Number of indexes depending on column count:

| Page Size | Single | 2-Column | 3-Column |
|---|---|---|---|
| 4096 | 203 | 145 | 113 |
| 8192 | 408 | 291 | 227 |
| 16384 | 818 | 584 | 454 |
| 32768 | 1637 | 1169 | 909 |
Character Index Limits
The maximum indexed string length is 9 bytes less than the maximum key length. The maximum indexable string length depends on the page size, the character set, and the collation.
Maximum indexable string length by charset type:

| Page Size | 1 byte/char | 2 byte/char | 3 byte/char | 4 byte/char |
|---|---|---|---|---|
| 4096 | 1015 | 507 | 338 | 253 |
| 8192 | 2039 | 1019 | 679 | 509 |
| 16384 | 4087 | 2043 | 1362 | 1021 |
| 32768 | 8183 | 4091 | 2727 | 2045 |
Depending on the collation, the maximum size can be further reduced as case-insensitive and accent-insensitive collations require more bytes per character in an index. See also Character Indexes in Chapter Data Types and Subtypes.
Parallelized Index Creation
Since Firebird 5.0, index creation can be parallelized.
Parallelization happens automatically if the current connection has two or more parallel workers — configured through ParallelWorkers in firebird.conf or isc_dpb_parallel_workers — and the server has parallel workers available.
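An illustrative firebird.conf fragment (the values are examples, not recommendations):

```ini
# Default number of parallel workers for each attachment (example value)
ParallelWorkers = 4
# Server-wide cap on workers a single attachment may use (example value)
MaxParallelWorkers = 8
```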
Examples Using CREATE INDEX
-
Creating an index for the UPDATER_ID column in the SALARY_HISTORY table

CREATE INDEX IDX_UPDATER ON SALARY_HISTORY (UPDATER_ID);
-
Creating an index with keys sorted in the descending order for the CHANGE_DATE column in the SALARY_HISTORY table

CREATE DESCENDING INDEX IDX_CHANGE ON SALARY_HISTORY (CHANGE_DATE);
-
Creating a multi-segment index for the ORDER_STATUS, PAID columns in the SALES table

CREATE INDEX IDX_SALESTAT ON SALES (ORDER_STATUS, PAID);
-
Creating an index that does not permit duplicate values for the NAME column in the COUNTRY table

CREATE UNIQUE INDEX UNQ_COUNTRY_NAME ON COUNTRY (NAME);
-
Creating a computed index for the PERSONS table

CREATE INDEX IDX_NAME_UPPER ON PERSONS COMPUTED BY (UPPER(NAME));

An index like this can be used for a case-insensitive search:

SELECT * FROM PERSONS WHERE UPPER(NAME) STARTING WITH UPPER('Iv');
-
Creating a partial index and using its condition:

CREATE INDEX IT1_COL ON T1 (COL) WHERE COL < 100;
SELECT * FROM T1 WHERE COL < 100;
-- PLAN (T1 INDEX (IT1_COL))
-
Creating a partial index which excludes NULL

CREATE INDEX IT1_COL2 ON T1 (COL) WHERE COL IS NOT NULL;
SELECT * FROM T1 WHERE COL > 100;
-- PLAN (T1 INDEX IT1_COL2)
-
Creating a partial index with ORed conditions

CREATE INDEX IT1_COL3 ON T1 (COL) WHERE COL = 1 OR COL = 2;
SELECT * FROM T1 WHERE COL = 2;
-- PLAN (T1 INDEX IT1_COL3)
-
Using a partial index to enforce uniqueness for a subset of rows

create table OFFER (
  OFFER_ID bigint generated always as identity primary key,
  PRODUCT_ID bigint not null,
  ARCHIVED boolean default false not null,
  PRICE decimal(9,2) not null
);

create unique index IDX_OFFER_UNIQUE_PRODUCT
  on OFFER (PRODUCT_ID)
  where not ARCHIVED;

insert into OFFER (PRODUCT_ID, ARCHIVED, PRICE) values (1, false, 18.95);
insert into OFFER (PRODUCT_ID, ARCHIVED, PRICE) values (1, true, 17.95);
insert into OFFER (PRODUCT_ID, ARCHIVED, PRICE) values (1, true, 16.95);
-- Next fails due to second record for PRODUCT_ID=1 and ARCHIVED=false:
insert into OFFER (PRODUCT_ID, ARCHIVED, PRICE) values (1, false, 19.95);
-- Statement failed, SQLSTATE = 23000
-- attempt to store duplicate value (visible to active transactions) in unique index "IDX_OFFER_UNIQUE_PRODUCT"
-- -Problematic key value is ("PRODUCT_ID" = 1)
5.5.2. ALTER INDEX
Activates or deactivates an index, and rebuilds an index
DSQL, ESQL
ALTER INDEX indexname {ACTIVE | INACTIVE}
| Parameter | Description |
|---|---|
| indexname | Index name |
The ALTER INDEX statement activates or deactivates an index. There is no facility on this statement for altering any attributes of the index.
INACTIVE
-
With the INACTIVE option, the index is switched from the active to the inactive state. The effect is similar to the DROP INDEX statement except that the index definition remains in the database. Altering a constraint index to the inactive state is not permitted.

An active index can be deactivated if there are no queries prepared using that index; otherwise, an “object in use” error is returned.

Activating an inactive index is also safe. However, if there are active transactions modifying the table, the transaction containing the ALTER INDEX statement will fail if it has the NOWAIT attribute. If the transaction is in WAIT mode, it will wait for completion of concurrent transactions.

On the other side of the coin, if our ALTER INDEX succeeds and starts to rebuild the index at COMMIT, other transactions modifying that table will fail or wait, according to their WAIT/NO WAIT attributes. The situation is the same for CREATE INDEX.

How is it Useful? It might be useful to switch an index to the inactive state whilst inserting, updating or deleting a large batch of records in the table that owns the index.

ACTIVE
-
Rebuilds the index (even if already active), and marks it as active.

How is it Useful? Even if the index is active when ALTER INDEX … ACTIVE is executed, the index will be rebuilt. Rebuilding indexes can be a useful piece of housekeeping to do, occasionally, on the indexes of a large table in a database that has frequent inserts, updates or deletes but is infrequently restored.
Who Can Alter an Index?
The ALTER INDEX statement can be executed by:
-
The owner of the table
-
Users with the ALTER ANY TABLE privilege
Use of ALTER INDEX on a Constraint Index

Altering the index of a PRIMARY KEY, FOREIGN KEY or UNIQUE constraint to INACTIVE is not permitted. However, ALTER INDEX … ACTIVE works just as well with constraint indexes as it does with others, as an index rebuilding tool.
5.5.3. DROP INDEX
Drops an index
DSQL, ESQL
DROP INDEX indexname
| Parameter | Description |
|---|---|
| indexname | Index name |
The DROP INDEX statement drops (deletes) the named index from the database.

A constraint index cannot be dropped using DROP INDEX. Constraint indexes are dropped during the process of executing the command ALTER TABLE … DROP CONSTRAINT ….
Who Can Drop an Index?
The DROP INDEX statement can be executed by:
-
The owner of the table
-
Users with the ALTER ANY TABLE privilege
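For example, dropping the index created in the CREATE INDEX examples:

```sql
DROP INDEX IDX_UPDATER;
```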
5.5.4. SET STATISTICS
Recalculates the selectivity of an index
DSQL, ESQL
SET STATISTICS INDEX indexname
| Parameter | Description |
|---|---|
| indexname | Index name |
The SET STATISTICS statement recalculates the selectivity of the specified index.
Who Can Update Index Statistics?
The SET STATISTICS statement can be executed by:
-
The owner of the table
-
Users with the ALTER ANY TABLE privilege
Index Selectivity
The selectivity of an index is the result of evaluating the number of rows that can be selected in a search on every index value. A unique index has the maximum selectivity because it is impossible to select more than one row for each value of an index key if it is used. Keeping the selectivity of an index up to date is important for the optimizer’s choices in seeking the most optimal query plan.
Index statistics in Firebird are not automatically recalculated in response to large batches of inserts, updates or deletions. It may be beneficial to recalculate the selectivity of an index after such operations because the selectivity tends to become outdated.
The statements CREATE INDEX and ALTER INDEX ACTIVE both store index statistics that correspond to the contents of the newly-[re]built index.

SET STATISTICS can be performed under concurrent load without risk of corruption. However, under concurrent load, the newly calculated statistics could become outdated as soon as SET STATISTICS finishes.
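For example, recalculating the selectivity of the index from the CREATE INDEX examples after a large batch of changes:

```sql
SET STATISTICS INDEX IDX_UPDATER;
```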
5.6. VIEW
A view is a virtual table that is a stored and named SELECT query for retrieving data of any complexity. Data can be retrieved from one or more tables, from other views and also from selectable stored procedures.
Unlike regular tables in relational databases, a view is not an independent data set stored in the database. The result is dynamically created as a data set when the view is selected.
The metadata of a view are available to the process that generates the binary code for stored procedures and triggers, as though they were concrete tables storing persistent data.
Firebird does not support materialized views.
5.6.1. CREATE VIEW
Creates a view
DSQL
CREATE VIEW viewname [<full_column_list>]
  AS <select_statement>
  [WITH CHECK OPTION]

<full_column_list> ::= (colname [, colname ...])
| Parameter | Description |
|---|---|
| viewname | View name. The maximum length is 63 characters |
| select_statement | SELECT statement |
| full_column_list | The list of columns in the view |
| colname | View column name. Duplicate column names are not allowed. |
The CREATE VIEW statement creates a new view. The identifier (name) of a view must be unique among the names of all views, tables, and stored procedures in the database.

The name of the new view can be followed by the list of column names that should be returned to the caller when the view is invoked. Names in the list do not have to be related to the names of the columns in the base tables from which they derive.

If the view column list is omitted, the system will use the column names and/or aliases from the SELECT statement. If duplicate names or non-aliased expression-derived columns make it impossible to obtain a valid list, creation of the view fails with an error.

The number of columns in the view’s list must match the number of columns in the selection list of the underlying SELECT statement in the view definition.
-
If the full list of columns is specified, it makes no sense to specify aliases in the SELECT statement because the names in the column list will override them
-
The column list is optional if all the columns in the SELECT are explicitly named and are unique in the selection list
Updatable Views
A view can be updatable or read-only.
If a view is updatable, the data retrieved when this view is called can be changed by the DML statements INSERT, UPDATE, DELETE, UPDATE OR INSERT or MERGE.
Changes made in an updatable view are applied to the underlying table(s).
A read-only view can be made updatable with the use of triggers. Once triggers have been defined on a view, changes posted to it will never be written automatically to the underlying table, even if the view was updatable to begin with. It is the responsibility of the programmer to ensure that the triggers update (or delete from, or insert into) the base tables as needed.
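As a sketch of this technique, the following BEFORE INSERT trigger makes a read-only join view writable by redirecting inserts to one of its base tables. All table, view and column names here are hypothetical, chosen only for illustration:

```sql
-- Hypothetical read-only view joining two tables
CREATE VIEW V_EMP_DEPT AS
SELECT e.ID, e.NAME, d.DEPT_NAME
FROM EMPLOYEE e
JOIN DEPARTMENT d ON d.ID = e.DEPT_ID;

SET TERM ^;

-- The trigger intercepts the insert on the view and applies it
-- to the EMPLOYEE base table; DEPT_NAME is resolved to its key.
CREATE TRIGGER V_EMP_DEPT_BI FOR V_EMP_DEPT
ACTIVE BEFORE INSERT POSITION 0
AS
BEGIN
  INSERT INTO EMPLOYEE (ID, NAME, DEPT_ID)
  VALUES (NEW.ID, NEW.NAME,
          (SELECT ID FROM DEPARTMENT
           WHERE DEPT_NAME = NEW.DEPT_NAME));
END^

SET TERM ;^
```

Once this trigger exists, INSERT statements against V_EMP_DEPT succeed even though the join view is not naturally updatable; the trigger alone is responsible for writing to the base tables.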
A view will be automatically updatable if all the following conditions are met:
- the SELECT statement queries only one table or one updatable view
- the SELECT statement does not call any stored procedures
- each base table (or base view) column not present in the view definition meets one of the following conditions:
  - it is nullable
  - it has a non-NULL default value
  - it has a trigger that supplies a permitted value
- the SELECT statement contains no fields derived from subqueries or other expressions
- the SELECT statement does not contain fields defined through aggregate functions (MIN, MAX, AVG, SUM, COUNT, LIST, etc.), statistical functions (CORR, COVAR_POP, COVAR_SAMP, etc.), linear regression functions (REGR_AVGX, REGR_AVGY, etc.) or any type of window function
- the SELECT statement contains no ORDER BY, GROUP BY or HAVING clause
- the SELECT statement does not include the keyword DISTINCT or row-restrictive keywords such as ROWS, FIRST, SKIP, OFFSET or FETCH
The RETURNING clause and updatable views
When a DML statement with a RETURNING clause operates on an updatable view with triggers, the values returned are those of the NEW context variables as they stand after the BEFORE triggers have fired; these are not necessarily the values that end up stored in the base table(s). To report the right values in RETURNING, the BEFORE triggers must assign those values to the corresponding NEW variables.
WITH CHECK OPTION
The optional WITH CHECK OPTION clause requires an updatable view to check whether new or updated data meet the condition specified in the WHERE clause of the SELECT statement.
Every attempt to insert a new record or to update an existing one is checked to verify that the new or updated record would meet the WHERE criteria. If a record fails the check, the operation is not performed and an error is raised.
WITH CHECK OPTION can be specified only in a CREATE VIEW statement in which a WHERE clause is present to restrict the output of the main SELECT statement. An error message is returned otherwise.
Please note:
If WITH CHECK OPTION is used in an updatable view, the engine checks the input against the WHERE clause before passing anything to the base relation. Therefore, if the check on the input fails, any default clauses or triggers on the base relation that might have been designed to correct the input will never come into action.
Furthermore, view fields omitted from the INSERT statement are passed as NULLs to the base relation, regardless of their presence or absence in the WHERE clause. As a result, base table defaults defined on such fields will not be applied. Triggers, on the other hand, will fire and work as expected.
For views that do not have WITH CHECK OPTION, fields omitted from the INSERT statement are not passed to the base relation at all, so any defaults will be applied.
Who Can Create a View?
The CREATE VIEW statement can be executed by:

- Users with the CREATE VIEW privilege
The creator of a view becomes its owner.
To create a view, a non-admin user also needs at least SELECT access to the underlying table(s) and/or view(s), and the EXECUTE privilege on any selectable stored procedures involved.
To enable insertions, updates and deletions through the view, the creator/owner must also possess the corresponding INSERT, UPDATE and DELETE rights on the underlying object(s).
Granting other users privileges on the view is only possible if the view owner has these privileges on the underlying objects WITH GRANT OPTION. This will always be the case if the view owner is also the owner of the underlying objects.
Examples of Creating Views
1. Creating a view returning the JOB_CODE and JOB_TITLE columns only for those jobs where MAX_SALARY is less than $15,000.

   CREATE VIEW ENTRY_LEVEL_JOBS AS
   SELECT JOB_CODE, JOB_TITLE
   FROM JOB
   WHERE MAX_SALARY < 15000;

2. Creating a view returning the JOB_CODE and JOB_TITLE columns only for those jobs where MAX_SALARY is less than $15,000. Whenever a new record is inserted or an existing record is updated, the MAX_SALARY < 15000 condition will be checked. If the condition is not true, the insert/update operation will be rejected.

   CREATE VIEW ENTRY_LEVEL_JOBS AS
   SELECT JOB_CODE, JOB_TITLE
   FROM JOB
   WHERE MAX_SALARY < 15000
   WITH CHECK OPTION;

3. Creating a view with an explicit column list.

   CREATE VIEW PRICE_WITH_MARKUP (
     CODE_PRICE, COST, COST_WITH_MARKUP
   ) AS
   SELECT CODE_PRICE, COST, COST * 1.1
   FROM PRICE;

4. Creating a view with the help of aliases for fields in the SELECT statement (the same result as in Example 3).

   CREATE VIEW PRICE_WITH_MARKUP AS
   SELECT CODE_PRICE, COST, COST * 1.1 AS COST_WITH_MARKUP
   FROM PRICE;

5. Creating a read-only view based on two tables and a stored procedure.

   CREATE VIEW GOODS_PRICE AS
   SELECT goods.name AS goodsname, price.cost AS cost, b.quantity AS quantity
   FROM goods
   JOIN price ON goods.code_goods = price.code_goods
   LEFT JOIN sp_get_balance(goods.code_goods) b ON 1 = 1;
5.6.2. ALTER VIEW
Alters a view
DSQL
ALTER VIEW viewname [<full_column_list>]
  AS <select_statement>
  [WITH CHECK OPTION]

<full_column_list> ::= (colname [, colname ...])
Parameter | Description
---|---
viewname | Name of an existing view
select_statement | SELECT statement
full_column_list | The list of columns in the view
colname | View column name. Duplicate column names are not allowed.
Use the ALTER VIEW statement for changing the definition of an existing view. Privileges for views remain intact and dependencies are not affected.
The syntax of the ALTER VIEW statement corresponds with that of CREATE VIEW.
Be careful when you change the number of columns in a view. Existing application code and PSQL modules that access the view may become invalid. For information on how to detect this kind of problem in stored procedures and triggers, see The RDB$VALID_BLR Field in the Appendix.
Who Can Alter a View?
The ALTER VIEW statement can be executed by:

- The owner of the view
- Users with the ALTER ANY VIEW privilege
5.6.3. CREATE OR ALTER VIEW
Creates a view if it doesn’t exist, or alters a view
DSQL
CREATE OR ALTER VIEW viewname [<full_column_list>]
  AS <select_statement>
  [WITH CHECK OPTION]

<full_column_list> ::= (colname [, colname ...])
Parameter | Description
---|---
viewname | Name of a view which may or may not exist
select_statement | SELECT statement
full_column_list | The list of columns in the view
colname | View column name. Duplicate column names are not allowed.
Use the CREATE OR ALTER VIEW statement for changing the definition of an existing view, or creating it if it does not exist. Privileges for an existing view remain intact and dependencies are not affected.
The syntax of the CREATE OR ALTER VIEW statement corresponds with that of CREATE VIEW.
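A minimal illustration, reusing the ENTRY_LEVEL_JOBS definition from the CREATE VIEW examples:

```sql
-- Creates the view on the first run; on later runs it redefines
-- the view while keeping existing privileges and dependencies intact.
CREATE OR ALTER VIEW ENTRY_LEVEL_JOBS AS
SELECT JOB_CODE, JOB_TITLE
FROM JOB
WHERE MAX_SALARY < 15000;
```

This makes the statement convenient for deployment scripts that must work against both fresh and existing databases.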
5.6.4. DROP VIEW
Drops a view
DSQL
DROP VIEW viewname
Parameter | Description
---|---
viewname | View name
The DROP VIEW statement drops (deletes) an existing view. The statement will fail if the view has dependencies.
Who Can Drop a View?
The DROP VIEW statement can be executed by:

- The owner of the view
- Users with the DROP ANY VIEW privilege
5.6.5. RECREATE VIEW
Drops a view if it exists, and creates a view
DSQL
RECREATE VIEW viewname [<full_column_list>]
  AS <select_statement>
  [WITH CHECK OPTION]

<full_column_list> ::= (colname [, colname ...])
Parameter | Description
---|---
viewname | View name. The maximum length is 63 characters
select_statement | SELECT statement
full_column_list | The list of columns in the view
colname | View column name. Duplicate column names are not allowed.
Creates or recreates a view. If there is a view with this name already, the engine will try to drop it before creating the new instance. If the existing view cannot be dropped, for example because of dependencies or insufficient rights, RECREATE VIEW fails with an error.
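A brief sketch, reusing the PRICE_WITH_MARKUP definition from the CREATE VIEW examples:

```sql
-- Drops PRICE_WITH_MARKUP if it exists, then creates it anew.
-- Unlike CREATE OR ALTER VIEW, any privileges granted on the old
-- view are lost together with the dropped object.
RECREATE VIEW PRICE_WITH_MARKUP AS
SELECT CODE_PRICE, COST, COST * 1.1 AS COST_WITH_MARKUP
FROM PRICE;
```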
5.7. TRIGGER
A trigger is a special type of stored procedure that is not called directly; instead, it is executed when a specified event occurs. A DML trigger is specific to a single relation (table or view) and one phase in the timing of the event (BEFORE or AFTER). A DML trigger can be specified to execute for one specific event (insert, update, delete) or for a combination of those events.
Two other forms of trigger exist:

- a “database trigger” can be specified to fire at the start or end of a user session (connection) or a user transaction.
- a “DDL trigger” can be specified to fire before or after execution of one or more types of DDL statements.
5.7.1. CREATE TRIGGER
Creates a trigger
DSQL, ESQL
CREATE TRIGGER trigname
  { <relation_trigger_legacy>
  | <relation_trigger_sql>
  | <database_trigger>
  | <ddl_trigger> }
  {<psql_trigger> | <external-module-body>}

<relation_trigger_legacy> ::=
  FOR {tablename | viewname}
  [ACTIVE | INACTIVE]
  {BEFORE | AFTER} <mutation_list>
  [POSITION number]

<relation_trigger_sql> ::=
  [ACTIVE | INACTIVE]
  {BEFORE | AFTER} <mutation_list>
  ON {tablename | viewname}
  [POSITION number]

<database_trigger> ::=
  [ACTIVE | INACTIVE]
  ON <db_event>
  [POSITION number]

<ddl_trigger> ::=
  [ACTIVE | INACTIVE]
  {BEFORE | AFTER} <ddl_event>
  [POSITION number]

<mutation_list> ::= <mutation> [OR <mutation> [OR <mutation>]]

<mutation> ::= INSERT | UPDATE | DELETE

<db_event> ::=
    CONNECT | DISCONNECT
  | TRANSACTION {START | COMMIT | ROLLBACK}

<ddl_event> ::=
    ANY DDL STATEMENT
  | <ddl_event_item> [{OR <ddl_event_item>} ...]

<ddl_event_item> ::=
    {CREATE | ALTER | DROP} TABLE
  | {CREATE | ALTER | DROP} PROCEDURE
  | {CREATE | ALTER | DROP} FUNCTION
  | {CREATE | ALTER | DROP} TRIGGER
  | {CREATE | ALTER | DROP} EXCEPTION
  | {CREATE | ALTER | DROP} VIEW
  | {CREATE | ALTER | DROP} DOMAIN
  | {CREATE | ALTER | DROP} ROLE
  | {CREATE | ALTER | DROP} SEQUENCE
  | {CREATE | ALTER | DROP} USER
  | {CREATE | ALTER | DROP} INDEX
  | {CREATE | DROP} COLLATION
  | ALTER CHARACTER SET
  | {CREATE | ALTER | DROP} PACKAGE
  | {CREATE | DROP} PACKAGE BODY
  | {CREATE | ALTER | DROP} MAPPING

<psql_trigger> ::=
  [SQL SECURITY {INVOKER | DEFINER}]
  <psql-module-body>

<psql-module-body> ::= !! See Syntax of Module Body !!

<external-module-body> ::= !! See Syntax of Module Body !!
Parameter | Description
---|---
trigname | Trigger name. The maximum length is 63 characters. It must be unique among all trigger names in the database.
relation_trigger_legacy | Legacy style of trigger declaration for a relation trigger
relation_trigger_sql | Relation trigger declaration compliant with the SQL standard
database_trigger | Database trigger declaration
tablename | Name of the table with which the relation trigger is associated
viewname | Name of the view with which the relation trigger is associated
mutation_list | List of relation (table or view) events
number | Position of the trigger in the firing order. From 0 to 32,767
db_event | Connection or transaction event
ddl_event | List of metadata change events
ddl_event_item | One of the metadata change events
The CREATE TRIGGER statement is used for creating a new trigger. A trigger can be created either for a relation (table or view) event (or a combination of relation events), for a database event, or for a DDL event.
CREATE TRIGGER, along with its associates ALTER TRIGGER, CREATE OR ALTER TRIGGER and RECREATE TRIGGER, is a compound statement, consisting of a header and a body. The header specifies the name of the trigger, the name of the relation (for a DML trigger), the phase of the trigger, the event(s) it applies to, and the position to determine an order between triggers.
The trigger body consists of optional declarations of local variables and named cursors followed by one or more statements, or blocks of statements, all enclosed in an outer block that begins with the keyword BEGIN and ends with the keyword END. Declarations and embedded statements are terminated with semicolons (‘;’).
The name of the trigger must be unique among all trigger names.
Statement Terminators
Some SQL statement editors — specifically the isql utility that comes with Firebird, and possibly some third-party editors — employ an internal convention that requires all statements to be terminated with a semicolon. This creates a conflict with PSQL syntax when coding in these environments. If you are unacquainted with this problem and its solution, please study the details in the PSQL chapter in the section entitled Switching the Terminator in isql.
SQL Security
The SQL SECURITY clause specifies the security context for executing other routines or inserting into other tables.
By default, a trigger applies the SQL Security property defined on its table (or — if the table doesn’t have the SQL Security property set — the database default), but it can be overridden by specifying it explicitly.
If the SQL Security property is changed for the table, triggers that do not have an explicit SQL Security property will not see the effect of the change until the next time the trigger is loaded into the metadata cache.
See also SQL Security in chapter Security.
The Trigger Body
The trigger body is either a PSQL body, or an external UDR module body.
See The Module Body in the PSQL chapter for details.
DML Triggers (on Tables or Views)
DML — or “relation” — triggers are executed at the row (record) level, every time a row is changed.
A trigger can be either ACTIVE or INACTIVE. Only active triggers are executed. Triggers are created ACTIVE by default.
Who Can Create a DML Trigger?
DML triggers can be created by:

- The owner of the table (or view)
- Users with — for a table — the ALTER ANY TABLE, or — for a view — the ALTER ANY VIEW privilege
Forms of Declaration
Firebird supports two forms of declaration for relation triggers:

- The legacy syntax
- The SQL standard-compliant form (recommended)
A relation trigger specifies — among other things — a phase and one or more events.
Phase
Phase concerns the timing of the trigger with regard to the change-of-state event in the row of data:

- A BEFORE trigger is fired before the specified database operation (insert, update or delete) is carried out
- An AFTER trigger is fired after the database operation has been completed
Row Events
A relation trigger definition specifies at least one of the DML operations INSERT, UPDATE and DELETE, to indicate one or more events on which the trigger should fire. If multiple operations are specified, they must be separated by the keyword OR. No operation may occur more than once.
Firing Order of Triggers
The keyword POSITION allows an optional execution order (“firing order”) to be specified for a series of triggers that have the same phase and event as their target. The default position is 0. If multiple triggers have the same position and phase, those triggers will be executed in an undefined order, while respecting the total order by position and phase.
Examples of CREATE TRIGGER for Tables and Views
1. Creating a trigger in the “legacy” form, firing before the event of inserting a new record into the CUSTOMER table occurs.

   CREATE TRIGGER SET_CUST_NO FOR CUSTOMER
   ACTIVE BEFORE INSERT POSITION 0
   AS
   BEGIN
     IF (NEW.CUST_NO IS NULL) THEN
       NEW.CUST_NO = GEN_ID(CUST_NO_GEN, 1);
   END

2. Creating a trigger firing before the event of inserting a new record into the CUSTOMER table, in the SQL standard-compliant form.

   CREATE TRIGGER set_cust_no
   ACTIVE BEFORE INSERT ON customer POSITION 0
   AS
   BEGIN
     IF (NEW.cust_no IS NULL) THEN
       NEW.cust_no = GEN_ID(cust_no_gen, 1);
   END

3. Creating a trigger that will fire after either inserting, updating or deleting a record in the CUSTOMER table.

   CREATE TRIGGER TR_CUST_LOG
   ACTIVE AFTER INSERT OR UPDATE OR DELETE ON CUSTOMER POSITION 10
   AS
   BEGIN
     INSERT INTO CHANGE_LOG (LOG_ID, ID_TABLE, TABLE_NAME, MUTATION)
     VALUES (NEXT VALUE FOR SEQ_CHANGE_LOG,
             OLD.CUST_NO,
             'CUSTOMER',
             CASE
               WHEN INSERTING THEN 'INSERT'
               WHEN UPDATING  THEN 'UPDATE'
               WHEN DELETING  THEN 'DELETE'
             END);
   END

4. With DEFINER set for trigger tr_ins, user US needs only the INSERT privilege on tr. If it were set for INVOKER, either the user or the trigger would also need the INSERT privilege on table t.

   create table tr (i integer);
   create table t (i integer);
   set term ^;
   create trigger tr_ins for tr after insert SQL SECURITY DEFINER
   as
   begin
     insert into t values (NEW.i);
   end^
   set term ;^
   grant insert on table tr to user us;

   commit;

   connect 'localhost:/tmp/29.fdb' user us password 'pas';

   insert into tr values(2);

   The result would be the same if SQL SECURITY DEFINER were specified for table TR:

   create table tr (i integer) SQL SECURITY DEFINER;
   create table t (i integer);
   set term ^;
   create trigger tr_ins for tr after insert
   as
   begin
     insert into t values (NEW.i);
   end^
   set term ;^
   grant insert on table tr to user us;

   commit;

   connect 'localhost:/tmp/29.fdb' user us password 'pas';

   insert into tr values(2);
Database Triggers
Triggers can be defined to fire upon “database events”; a mixture of events that act across the scope of a session (connection), and events that act across the scope of an individual transaction:

- CONNECT
- DISCONNECT
- TRANSACTION START
- TRANSACTION COMMIT
- TRANSACTION ROLLBACK
DDL Triggers are a subtype of database triggers, covered in a separate section.
Who Can Create a Database Trigger?
Database triggers can be created by:

- Users with the ALTER DATABASE privilege
Execution of Database Triggers and Exception Handling
CONNECT and DISCONNECT triggers are executed in a transaction created specifically for this purpose. This transaction uses the default isolation level, i.e. snapshot (concurrency), write and wait. If all goes well, the transaction is committed. Uncaught exceptions cause the transaction to roll back, and

- for a CONNECT trigger, the connection is then broken and the exception is returned to the client
- for a DISCONNECT trigger, exceptions are not reported; the connection is broken as intended
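To keep a faulty CONNECT trigger from blocking all logins, errors can be caught with a WHEN ANY handler inside the trigger itself. A sketch, assuming a hypothetical LOG_CONNECT table:

```sql
SET TERM ^;

CREATE TRIGGER TR_CONNECT_SAFE
ACTIVE ON CONNECT POSITION 0
AS
BEGIN
  INSERT INTO LOG_CONNECT (USERNAME, ATIME)
  VALUES (CURRENT_USER, CURRENT_TIMESTAMP);

  WHEN ANY DO
  BEGIN
    -- Swallow the error: an uncaught exception here would
    -- break the connection and lock users out of the database.
  END
END^

SET TERM ;^
```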
TRANSACTION triggers are executed within the transaction whose start, commit or rollback evokes them. The action taken after an uncaught exception depends on the event:

- In a TRANSACTION START trigger, the exception is reported to the client and the transaction is rolled back
- In a TRANSACTION COMMIT trigger, the exception is reported, the trigger’s actions so far are undone and the commit is cancelled
- In a TRANSACTION ROLLBACK trigger, the exception is not reported and the transaction is rolled back as intended
There is no direct way of knowing if a DISCONNECT or TRANSACTION ROLLBACK trigger caused an exception. It also follows that the connection to the database cannot happen if a CONNECT trigger causes an exception, and a transaction cannot start if a TRANSACTION START trigger causes one. Both phenomena effectively lock you out of your database until you get in there with database triggers suppressed and fix the bad code.
Some Firebird command-line tools have been supplied with switches that an administrator can use to suppress the automatic firing of database triggers. So far, they are:
- gbak -nodbtriggers
- isql -nodbtriggers
- nbackup -T
In a two-phase commit scenario, TRANSACTION COMMIT triggers fire in the prepare phase, not at the commit.
- The use of the IN AUTONOMOUS TRANSACTION DO statement in the database event triggers related to transactions (TRANSACTION START, TRANSACTION ROLLBACK, TRANSACTION COMMIT) may cause the autonomous transaction to enter an infinite loop
- The DISCONNECT and TRANSACTION ROLLBACK event triggers will not be executed when clients are disconnected via monitoring tables (DELETE FROM MON$ATTACHMENTS)
Examples of CREATE TRIGGER for “Database Triggers”
1. Creating a trigger for the event of connecting to the database that logs users logging into the system. The trigger is created as inactive.

   CREATE TRIGGER tr_log_connect
   INACTIVE ON CONNECT POSITION 0
   AS
   BEGIN
     INSERT INTO LOG_CONNECT (ID, USERNAME, ATIME)
     VALUES (NEXT VALUE FOR SEQ_LOG_CONNECT,
             CURRENT_USER,
             CURRENT_TIMESTAMP);
   END

2. Creating a trigger for the event of connecting to the database that does not permit any users, except for SYSDBA, to log in during off hours.

   CREATE EXCEPTION E_INCORRECT_WORKTIME 'The working day has not started yet.';

   CREATE TRIGGER TR_LIMIT_WORKTIME
   ACTIVE ON CONNECT POSITION 1
   AS
   BEGIN
     IF ((CURRENT_USER <> 'SYSDBA') AND
         NOT (CURRENT_TIME BETWEEN time '9:00' AND time '17:00')) THEN
       EXCEPTION E_INCORRECT_WORKTIME;
   END
DDL Triggers
DDL triggers allow restrictions to be placed on users who attempt to create, alter or drop a DDL object. Their other purpose is to keep a metadata change log.
DDL triggers fire on specified metadata change events in a specified phase. BEFORE triggers run before changes to system tables; AFTER triggers run after changes to system tables.
Note that a CREATE OR ALTER statement fires a trigger with the CREATE event if the object does not yet exist, or with the ALTER event otherwise, while a RECREATE statement fires the DROP event (if the object exists) followed by the CREATE event.
In a sense, DDL triggers are a sub-type of database triggers.
Who Can Create a DDL Trigger?
DDL triggers can be created by:

- Users with the ALTER DATABASE privilege
Suppressing DDL Triggers
A DDL trigger is a type of database trigger. See Suppressing Database Triggers for how to suppress DDL — and database — triggers.
Examples of DDL Triggers
1. Here is how you might use a DDL trigger to enforce a consistent naming scheme; in this case, stored procedure names should begin with the prefix “SP_”:

   set auto on;
   create exception e_invalid_sp_name 'Invalid SP name (should start with SP_)';

   set term !;

   create trigger trig_ddl_sp before CREATE PROCEDURE
   as
   begin
     if (rdb$get_context('DDL_TRIGGER', 'OBJECT_NAME') not starting 'SP_') then
       exception e_invalid_sp_name;
   end!

   Test:

   create procedure sp_test as begin end!
   create procedure test as begin end!

   -- The last command raises this exception and procedure TEST is not created
   -- Statement failed, SQLSTATE = 42000
   -- exception 1
   -- -E_INVALID_SP_NAME
   -- -Invalid SP name (should start with SP_)
   -- -At trigger 'TRIG_DDL_SP' line: 4, col: 5

   set term ;!
2. Implement custom DDL security; in this case restricting the running of DDL commands to certain users:

   create exception e_access_denied 'Access denied';

   set term !;

   create trigger trig_ddl before any ddl statement
   as
   begin
     if (current_user <> 'SUPER_USER') then
       exception e_access_denied;
   end!

   Test:

   create procedure sp_test as begin end!

   -- The last command raises this exception and procedure SP_TEST is not created
   -- Statement failed, SQLSTATE = 42000
   -- exception 1
   -- -E_ACCESS_DENIED
   -- -Access denied
   -- -At trigger 'TRIG_DDL' line: 4, col: 5

   set term ;!

   Firebird has privileges for executing DDL statements, so writing a DDL trigger for this should be a last resort, used only if the same effect cannot be achieved using privileges.
3. Use a trigger to log DDL actions and attempts:

   create sequence ddl_seq;

   create table ddl_log (
     id bigint not null primary key,
     moment timestamp not null,
     user_name varchar(63) not null,
     event_type varchar(25) not null,
     object_type varchar(25) not null,
     ddl_event varchar(25) not null,
     object_name varchar(63) not null,
     sql_text blob sub_type text not null,
     ok char(1) not null
   );

   set term !;

   create trigger trig_ddl_log_before before any ddl statement
   as
     declare id type of column ddl_log.id;
   begin
     -- We do the changes in an AUTONOMOUS TRANSACTION, so if an exception happens
     -- and the command didn't run, the log will survive.
     in autonomous transaction do
     begin
       insert into ddl_log (id, moment, user_name, event_type, object_type,
                            ddl_event, object_name, sql_text, ok)
         values (next value for ddl_seq, current_timestamp, current_user,
                 rdb$get_context('DDL_TRIGGER', 'EVENT_TYPE'),
                 rdb$get_context('DDL_TRIGGER', 'OBJECT_TYPE'),
                 rdb$get_context('DDL_TRIGGER', 'DDL_EVENT'),
                 rdb$get_context('DDL_TRIGGER', 'OBJECT_NAME'),
                 rdb$get_context('DDL_TRIGGER', 'SQL_TEXT'),
                 'N')
         returning id into id;
       rdb$set_context('USER_SESSION', 'trig_ddl_log_id', id);
     end
   end!

   The above trigger will fire for this DDL command. It’s a good idea to use -nodbtriggers when working with them!

   create trigger trig_ddl_log_after after any ddl statement
   as
   begin
     -- Here we need an AUTONOMOUS TRANSACTION because the original transaction
     -- will not see the record inserted on the BEFORE trigger autonomous
     -- transaction if user transaction is not READ COMMITTED.
     in autonomous transaction do
       update ddl_log set ok = 'Y'
         where id = rdb$get_context('USER_SESSION', 'trig_ddl_log_id');
   end!

   commit!

   set term ;!

   -- Delete the record about trig_ddl_log_after creation.
   delete from ddl_log;
   commit;

   Test:

   -- This will be logged one time
   -- (as T1 did not exist, RECREATE acts as CREATE) with OK = Y.
   recreate table t1 (
     n1 integer,
     n2 integer
   );

   -- This will fail as T1 already exists, so OK will be N.
   create table t1 (
     n1 integer,
     n2 integer
   );

   -- T2 does not exist. There will be no log.
   drop table t2;

   -- This will be logged twice
   -- (as T1 exists, RECREATE acts as DROP and CREATE) with OK = Y.
   recreate table t1 (
     n integer
   );

   commit;

   select id, ddl_event, object_name, sql_text, ok
     from ddl_log order by id;

    ID DDL_EVENT       OBJECT_NAME   SQL_TEXT OK
   === =============== ============= ======== ==
     2 CREATE TABLE    T1                80:3 Y
   SQL_TEXT:
   recreate table t1 (
     n1 integer,
     n2 integer
   )
     3 CREATE TABLE    T1                80:2 N
   SQL_TEXT:
   create table t1 (
     n1 integer,
     n2 integer
   )
     4 DROP TABLE      T1                80:6 Y
   SQL_TEXT:
   recreate table t1 (
     n integer
   )
     5 CREATE TABLE    T1                80:9 Y
   SQL_TEXT:
   recreate table t1 (
     n integer
   )
See also: ALTER TRIGGER, CREATE OR ALTER TRIGGER, RECREATE TRIGGER, DROP TRIGGER, and DDL Triggers in Chapter Procedural SQL (PSQL) Statements.
5.7.2. ALTER TRIGGER
Alters a trigger
DSQL, ESQL
ALTER TRIGGER trigname
[ACTIVE | INACTIVE]
[{BEFORE | AFTER} <mutation_list>]
[POSITION number]
[{<psql_trigger> | <external-module-body>}]
<psql_trigger> ::=
[<sql_security>]
[<psql-module-body>]
<sql_security> ::=
SQL SECURITY {INVOKER | DEFINER}
| DROP SQL SECURITY
!! See syntax of CREATE TRIGGER
for further rules !!
The ALTER TRIGGER statement only allows certain changes to the header and body of a trigger.
Permitted Changes to Triggers
- Status (ACTIVE | INACTIVE)
- Phase (BEFORE | AFTER) (of DML triggers)
- Events (of DML triggers)
- Position in the firing order
- Modifications to code in the trigger body
If an element is not specified, it remains unchanged.
A DML trigger cannot be changed to a database or DDL trigger. It is not possible to change the event(s) or phase of a database or DDL trigger.
Reminders
The BEFORE keyword directs that the trigger be executed before the associated event occurs; the AFTER keyword directs that it be executed after the event.
More than one DML event — INSERT, UPDATE, DELETE — can be covered in a single trigger. The events should be separated with the keyword OR. No event should be mentioned more than once.
The keyword POSITION allows an optional execution order (“firing order”) to be specified for a series of triggers that have the same phase and event as their target. The default position is 0. If multiple triggers have the same position and phase, those triggers will be executed in an undefined order, while respecting the total order by position and phase.
Who Can Alter a Trigger?
DML triggers can be altered by:

- The owner of the table (or view)
- Users with — for a table — the ALTER ANY TABLE, or — for a view — the ALTER ANY VIEW privilege

Database and DDL triggers can be altered by:

- Users with the ALTER DATABASE privilege
Examples using ALTER TRIGGER
1. Deactivating the set_cust_no trigger (switching it to the inactive status).

   ALTER TRIGGER set_cust_no INACTIVE;

2. Changing the firing order position of the set_cust_no trigger.

   ALTER TRIGGER set_cust_no POSITION 14;

3. Switching the TR_CUST_LOG trigger to the inactive status and modifying the list of events.

   ALTER TRIGGER TR_CUST_LOG
   INACTIVE
   AFTER INSERT OR UPDATE;

4. Switching the tr_log_connect trigger to the active status, changing its position and body.

   ALTER TRIGGER tr_log_connect
   ACTIVE POSITION 1
   AS
   BEGIN
     INSERT INTO LOG_CONNECT (ID, USERNAME, ROLENAME, ATIME)
     VALUES (NEXT VALUE FOR SEQ_LOG_CONNECT,
             CURRENT_USER,
             CURRENT_ROLE,
             CURRENT_TIMESTAMP);
   END
5.7.3. CREATE OR ALTER TRIGGER
Creates a trigger if it doesn’t exist, or alters a trigger
DSQL
CREATE OR ALTER TRIGGER trigname
{ <relation_trigger_legacy>
| <relation_trigger_sql>
| <database_trigger>
| <ddl_trigger> }
{<psql_trigger> | <external-module-body>}
!! See syntax of CREATE TRIGGER for further rules !!

The CREATE OR ALTER TRIGGER statement creates a new trigger if it does not exist; otherwise it alters and recompiles it with the privileges intact and dependencies unaffected.
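A minimal sketch, reusing the set_cust_no trigger from the CREATE TRIGGER examples (the customer table and cust_no_gen sequence are assumed to exist):

```sql
SET TERM ^;

-- Creates the trigger on the first run; thereafter redefines it
-- in place, keeping privileges and dependencies intact.
CREATE OR ALTER TRIGGER set_cust_no
ACTIVE BEFORE INSERT ON customer POSITION 0
AS
BEGIN
  IF (NEW.cust_no IS NULL) THEN
    NEW.cust_no = NEXT VALUE FOR cust_no_gen;
END^

SET TERM ;^
```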
5.7.4. DROP TRIGGER
Drops a trigger
DSQL, ESQL
DROP TRIGGER trigname
Parameter | Description
---|---
trigname | Trigger name
The DROP TRIGGER statement drops (deletes) an existing trigger.
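For instance, to remove the set_cust_no trigger defined in the earlier CREATE TRIGGER examples:

```sql
DROP TRIGGER set_cust_no;
```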
Who Can Drop a Trigger?
DML triggers can be dropped by:

- The owner of the table (or view)
- Users with — for a table — the ALTER ANY TABLE, or — for a view — the ALTER ANY VIEW privilege

Database and DDL triggers can be dropped by:

- Users with the ALTER DATABASE privilege
5.7.5. RECREATE TRIGGER
Drops a trigger if it exists, and creates a trigger
DSQL
RECREATE TRIGGER trigname
{ <relation_trigger_legacy>
| <relation_trigger_sql>
| <database_trigger>
| <ddl_trigger> }
{<psql_trigger> | <external-module-body>}
!! See syntax of CREATE TRIGGER
for further rules !!
The RECREATE TRIGGER statement creates a new trigger if no trigger with the specified name exists; otherwise the RECREATE TRIGGER statement tries to drop the existing trigger and create a new one. The operation will fail on COMMIT if the trigger is in use.
Be aware that dependency errors are not detected until the COMMIT phase of this operation.
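A brief sketch, reusing the hypothetical TR_CUST_LOG trigger and CHANGE_LOG table from the CREATE TRIGGER examples:

```sql
SET TERM ^;

-- Drops TR_CUST_LOG if it exists, then creates it anew
-- with a narrowed event list.
RECREATE TRIGGER TR_CUST_LOG
ACTIVE AFTER INSERT OR UPDATE ON CUSTOMER POSITION 10
AS
BEGIN
  INSERT INTO CHANGE_LOG (LOG_ID, ID_TABLE, TABLE_NAME)
  VALUES (NEXT VALUE FOR SEQ_CHANGE_LOG, NEW.CUST_NO, 'CUSTOMER');
END^

SET TERM ;^
```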
5.8. PROCEDURE
A stored procedure is a software module that can be called from a client, another procedure, function, executable block or trigger. Stored procedures are written in procedural SQL (PSQL) or defined using a UDR (User-Defined Routine). Most SQL statements are available in PSQL as well, sometimes with limitations or extensions. Notable limitations are the prohibition on DDL and transaction control statements in PSQL.
Stored procedures can have many input and output parameters.
5.8.1. CREATE PROCEDURE
Creates a stored procedure
DSQL, ESQL
CREATE PROCEDURE procname [ ( [ <in_params> ] ) ]
  [RETURNS (<out_params>)]
  {<psql_procedure> | <external-module-body>}

<in_params> ::= <inparam> [, <inparam> ...]

<inparam> ::= <param_decl> [{= | DEFAULT} <value>]

<out_params> ::= <outparam> [, <outparam> ...]

<outparam> ::= <param_decl>

<value> ::= {<literal> | NULL | <context_var>}

<param_decl> ::=
  paramname <domain_or_non_array_type> [NOT NULL]
  [COLLATE collation]

<type> ::=
    <datatype>
  | [TYPE OF] domain
  | TYPE OF COLUMN rel.col

<domain_or_non_array_type> ::= !! See Scalar Data Types Syntax !!

<psql_procedure> ::=
  [SQL SECURITY {INVOKER | DEFINER}]
  <psql-module-body>

<psql-module-body> ::= !! See Syntax of Module Body !!

<external-module-body> ::= !! See Syntax of Module Body !!
Parameter | Description
---|---
procname | Stored procedure name. The maximum length is 63 characters. Must be unique among all table, view and procedure names in the database
inparam | Input parameter description
outparam | Output parameter description
literal | A literal value that is assignment-compatible with the data type of the parameter
context_var | Any context variable whose type is compatible with the data type of the parameter
paramname | The name of an input or output parameter of the procedure. The maximum length is 63 characters. The name of the parameter must be unique among input and output parameters of the procedure and its local variables
collation | Collation
The CREATE PROCEDURE statement creates a new stored procedure. The name of the procedure must be unique among the names of all stored procedures, tables, and views in the database.
CREATE PROCEDURE is a compound statement, consisting of a header and a body. The header specifies the name of the procedure and declares input parameters and the output parameters, if any, that are to be returned by the procedure.
The procedure body consists of declarations for any local variables, named cursors, and subroutines that will be used by the procedure, followed by one or more statements, or blocks of statements, all enclosed in an outer block that begins with the keyword BEGIN and ends with the keyword END. Declarations and embedded statements are terminated with semicolons (‘;’).
Statement Terminators
Some SQL statement editors — specifically the isql utility that comes with Firebird, and possibly some third-party editors — employ an internal convention that requires all statements to be terminated with a semicolon. This creates a conflict with PSQL syntax when coding in these environments. If you are unacquainted with this problem and its solution, please study the details in the PSQL chapter in the section entitled Switching the Terminator in isql.
Parameters
Each parameter has a data type. The `NOT NULL` constraint can also be specified for any parameter, to prevent `NULL` being passed or assigned to it. A collation can be specified for string-type parameters, using the `COLLATE` clause.
- Input Parameters: Input parameters are presented as a parenthesized list following the name of the procedure. They are passed by value into the procedure, so any changes inside the procedure have no effect on the parameters in the caller. Input parameters may have default values. Parameters with default values specified must be added at the end of the list of parameters.
- Output Parameters: The optional `RETURNS` clause is for specifying a parenthesised list of output parameters for the stored procedure.
SQL Security

The `SQL SECURITY` clause specifies the security context for executing other routines or inserting into other tables. When SQL Security is not specified, the default value of the database is applied at runtime. The `SQL SECURITY` clause can only be specified for PSQL procedures, and is not valid for procedures defined in a package.

See also SQL Security in chapter Security.

Variable, Cursor and Subroutine Declarations

The optional declarations section, located at the start of the body of the procedure definition, defines variables (including cursors) and subroutines local to the procedure. Local variable declarations follow the same rules as parameters regarding specification of the data type. See details in the PSQL chapter for `DECLARE VARIABLE`, `DECLARE CURSOR`, `DECLARE FUNCTION`, and `DECLARE PROCEDURE`.
External UDR Procedures

A stored procedure can also be located in an external module. In this case, instead of a procedure body, the `CREATE PROCEDURE` specifies the location of the procedure in the external module using the `EXTERNAL` clause. The optional `NAME` clause specifies the name of the external module, the name of the procedure inside the module, and — optionally — user-defined information. The required `ENGINE` clause specifies the name of the UDR engine that handles communication between Firebird and the external module. The optional `AS` clause accepts a string literal “body”, which can be used by the engine or module for various purposes.
Who Can Create a Procedure

The `CREATE PROCEDURE` statement can be executed by:

- Users with the `CREATE PROCEDURE` privilege

The user executing the `CREATE PROCEDURE` statement becomes the owner of the procedure.
Examples

- Creating a stored procedure that inserts a record into the `BREED` table and returns the code of the inserted record:

```sql
CREATE PROCEDURE ADD_BREED (
  NAME D_BREEDNAME, /* Domain attributes are inherited */
  NAME_EN TYPE OF D_BREEDNAME, /* Only the domain type is inherited */
  SHORTNAME TYPE OF COLUMN BREED.SHORTNAME,
    /* The table column type is inherited */
  REMARK VARCHAR(120) CHARACTER SET WIN1251 COLLATE PXW_CYRL,
  CODE_ANIMAL INT NOT NULL DEFAULT 1
)
RETURNS (
  CODE_BREED INT
)
AS
BEGIN
  INSERT INTO BREED (
    CODE_ANIMAL, NAME, NAME_EN, SHORTNAME, REMARK)
  VALUES (
    :CODE_ANIMAL, :NAME, :NAME_EN, :SHORTNAME, :REMARK)
  RETURNING CODE_BREED INTO CODE_BREED;
END
```
- Creating a selectable stored procedure that generates data for mailing labels (from `employee.fdb`):

```sql
CREATE PROCEDURE mail_label (cust_no INTEGER)
RETURNS (line1 CHAR(40), line2 CHAR(40), line3 CHAR(40),
         line4 CHAR(40), line5 CHAR(40), line6 CHAR(40))
AS
  DECLARE VARIABLE customer VARCHAR(25);
  DECLARE VARIABLE first_name VARCHAR(15);
  DECLARE VARIABLE last_name VARCHAR(20);
  DECLARE VARIABLE addr1 VARCHAR(30);
  DECLARE VARIABLE addr2 VARCHAR(30);
  DECLARE VARIABLE city VARCHAR(25);
  DECLARE VARIABLE state VARCHAR(15);
  DECLARE VARIABLE country VARCHAR(15);
  DECLARE VARIABLE postcode VARCHAR(12);
  DECLARE VARIABLE cnt INTEGER;
BEGIN
  line1 = '';
  line2 = '';
  line3 = '';
  line4 = '';
  line5 = '';
  line6 = '';

  SELECT customer, contact_first, contact_last, address_line1,
         address_line2, city, state_province, country, postal_code
  FROM CUSTOMER
  WHERE cust_no = :cust_no
  INTO :customer, :first_name, :last_name, :addr1, :addr2,
       :city, :state, :country, :postcode;

  IF (customer IS NOT NULL) THEN
    line1 = customer;
  IF (first_name IS NOT NULL) THEN
    line2 = first_name || ' ' || last_name;
  ELSE
    line2 = last_name;
  IF (addr1 IS NOT NULL) THEN
    line3 = addr1;
  IF (addr2 IS NOT NULL) THEN
    line4 = addr2;

  IF (country = 'USA') THEN
  BEGIN
    IF (city IS NOT NULL) THEN
      line5 = city || ', ' || state || ' ' || postcode;
    ELSE
      line5 = state || ' ' || postcode;
  END
  ELSE
  BEGIN
    IF (city IS NOT NULL) THEN
      line5 = city || ', ' || state;
    ELSE
      line5 = state;
    line6 = country || ' ' || postcode;
  END

  SUSPEND; -- the statement that sends an output row to the buffer
           -- and makes the procedure "selectable"
END
```
- With `DEFINER` set for procedure `p`, user `US` needs only the `EXECUTE` privilege on `p`. If it were set for `INVOKER`, either the user or the procedure would also need the `INSERT` privilege on table `t`.

```sql
set term ^;
create procedure p (i integer)
SQL SECURITY DEFINER
as
begin
  insert into t values (:i);
end^
set term ;^

grant execute on procedure p to user us;
commit;

connect 'localhost:/tmp/17.fdb' user us password 'pas';
execute procedure p(1);
```
5.8.2. ALTER PROCEDURE
Alters a stored procedure
DSQL, ESQL
ALTER PROCEDURE procname [ ( [ <in_params> ] ) ]
[RETURNS (<out_params>)]
{<psql_procedure> | <external-module-body>}
!! See syntax of `CREATE PROCEDURE` for further rules !!

The `ALTER PROCEDURE` statement allows the following changes to a stored procedure definition:

- the set and characteristics of input and output parameters
- local variables
- code in the body of the stored procedure

After `ALTER PROCEDURE` executes, existing privileges remain intact and dependencies are not affected.

Altering a procedure without specifying the `SQL SECURITY` clause will remove the SQL Security property if currently set for this procedure. This means the behaviour will revert to the database default.

Take care when changing the number and type of input and output parameters in stored procedures. Existing application code and procedures and triggers that call it could become invalid because the new description of the parameters is incompatible with the old calling format. For information on how to troubleshoot such a situation, see the article The `RDB$VALID_BLR` Field in the Appendix.
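As a sketch of the statement shape (the procedure and table names follow the `employee.fdb` example database and are illustrative), redefining the body of an existing procedure while its privileges and dependencies stay intact:

```sql
SET TERM ^ ;

-- Replace the body of an existing procedure;
-- EXECUTE privileges already granted on it are preserved
ALTER PROCEDURE GET_EMP_PROJ (
  EMP_NO SMALLINT)
RETURNS (
  PROJ_ID VARCHAR(20))
AS
BEGIN
  FOR SELECT PROJ_ID
      FROM EMPLOYEE_PROJECT
      WHERE EMP_NO = :emp_no
      INTO :proj_id
  DO
    SUSPEND;
END^

SET TERM ; ^
```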
Who Can Alter a Procedure

The `ALTER PROCEDURE` statement can be executed by:

- The owner of the stored procedure
- Users with the `ALTER ANY PROCEDURE` privilege
5.8.3. CREATE OR ALTER PROCEDURE
Creates a stored procedure if it does not exist, or alters a stored procedure
DSQL
CREATE OR ALTER PROCEDURE procname [ ( [ <in_params> ] ) ]
[RETURNS (<out_params>)]
{<psql_procedure> | <external-module-body>}
!! See syntax of `CREATE PROCEDURE` for further rules !!

The `CREATE OR ALTER PROCEDURE` statement creates a new stored procedure or alters an existing one. If the stored procedure does not exist, it will be created by invoking a `CREATE PROCEDURE` statement transparently. If the procedure already exists, it will be altered and compiled without affecting its existing privileges and dependencies.
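A minimal sketch of the statement (the procedure name and body are illustrative): the same script can be run repeatedly, creating the procedure on the first run and altering it in place on later runs.

```sql
SET TERM ^ ;

-- Created on first execution, altered in place afterwards,
-- keeping existing privileges and dependencies
CREATE OR ALTER PROCEDURE DOUBLE_IT (I INTEGER)
RETURNS (O INTEGER)
AS
BEGIN
  O = I * 2;
  SUSPEND;
END^

SET TERM ; ^
```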
5.8.4. DROP PROCEDURE
Drops a stored procedure
DSQL, ESQL
DROP PROCEDURE procname
| Parameter | Description |
|---|---|
| procname | Name of an existing stored procedure |
The `DROP PROCEDURE` statement deletes an existing stored procedure. If the stored procedure has any dependencies, the attempt to delete it will fail and raise an error.
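For example (the procedure name is illustrative):

```sql
DROP PROCEDURE GET_EMP_PROJ;
```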
Who Can Drop a Procedure

The `DROP PROCEDURE` statement can be executed by:

- The owner of the stored procedure
- Users with the `DROP ANY PROCEDURE` privilege
5.8.5. RECREATE PROCEDURE
Drops a stored procedure if it exists, and creates a stored procedure
DSQL
RECREATE PROCEDURE procname [ ( [ <in_params> ] ) ]
[RETURNS (<out_params>)]
{<psql_procedure> | <external-module-body>}
!! See syntax of `CREATE PROCEDURE` for further rules !!

The `RECREATE PROCEDURE` statement creates a new stored procedure or recreates an existing one. If a procedure with this name already exists, the engine will try to drop it and create a new one. Recreating an existing procedure will fail at the `COMMIT` request if the procedure has dependencies.

Be aware that dependency errors are not detected until the `COMMIT` phase of this operation.

After a procedure is successfully recreated, existing privileges to execute the stored procedure and the privileges of the stored procedure itself are dropped.
`RECREATE PROCEDURE` Example

Creating the new `GET_EMP_PROJ` stored procedure or recreating the existing `GET_EMP_PROJ` stored procedure:

```sql
RECREATE PROCEDURE GET_EMP_PROJ (
  EMP_NO SMALLINT)
RETURNS (
  PROJ_ID VARCHAR(20))
AS
BEGIN
  FOR SELECT
      PROJ_ID
    FROM
      EMPLOYEE_PROJECT
    WHERE
      EMP_NO = :emp_no
    INTO :proj_id
  DO
    SUSPEND;
END
```
5.9. FUNCTION
A stored function is a user-defined function stored in the metadata of a database, and running on the server. Stored functions can be called by stored procedures, stored functions (including the function itself), triggers and DSQL. When a stored function calls itself, such a stored function is called a recursive function.
Unlike stored procedures, stored functions always return a single scalar value. To return a value from a stored function, use the `RETURN` statement, which immediately ends the function.
5.9.1. CREATE FUNCTION
Creates a stored function
DSQL
CREATE FUNCTION funcname [ ( [ <in_params> ] ) ]
  RETURNS <domain_or_non_array_type> [COLLATE collation]
  [DETERMINISTIC]
  {<psql_function> | <external-module-body>}

<in_params> ::= <inparam> [, <inparam> ... ]

<inparam> ::= <param-decl> [ { = | DEFAULT } <value> ]

<value> ::= { <literal> | NULL | <context-var> }

<param-decl> ::= paramname <domain_or_non_array_type> [NOT NULL]
  [COLLATE collation]

<domain_or_non_array_type> ::= !! See Scalar Data Types Syntax !!

<psql_function> ::=
  [SQL SECURITY {INVOKER | DEFINER}]
  <psql-module-body>

<psql-module-body> ::= !! See Syntax of Module Body !!

<external-module-body> ::= !! See Syntax of Module Body !!
| Parameter | Description |
|---|---|
| funcname | Stored function name. The maximum length is 63 characters. Must be unique among all function names in the database. |
| inparam | Input parameter description |
| collation | Collation |
| literal | A literal value that is assignment-compatible with the data type of the parameter |
| context-var | Any context variable whose type is compatible with the data type of the parameter |
| paramname | The name of an input parameter of the function. The maximum length is 63 characters. The name of the parameter must be unique among input parameters of the function and its local variables. |
The `CREATE FUNCTION` statement creates a new stored function.

The stored function name must be unique among the names of all stored and external (legacy) functions, excluding sub-functions or functions in packages. For sub-functions or functions in packages, the name must be unique within its module (package, stored procedure, stored function, trigger).

It is advisable to not reuse function names between global stored functions and stored functions in packages, although this is legal. At the moment, it is not possible to call a function or procedure from the global namespace from inside a package, if that package defines a function or procedure with the same name. In that situation, the function or procedure of the package will be called.

`CREATE FUNCTION` is a compound statement with a header and a body. The header defines the name of the stored function, and declares input parameters and return type.

The function body consists of optional declarations of local variables, named cursors, and subroutines (sub-functions and sub-procedures), and one or more statements or statement blocks, enclosed in an outer block that starts with the keyword `BEGIN` and ends with the keyword `END`. Declarations and statements inside the function body must be terminated with a semicolon (‘;’).
Statement Terminators
Some SQL statement editors — specifically the isql utility that comes with Firebird, and possibly some third-party editors — employ an internal convention that requires all statements to be terminated with a semicolon. This creates a conflict with PSQL syntax when coding in these environments. If you are unacquainted with this problem and its solution, please study the details in the PSQL chapter in the section entitled Switching the Terminator in isql.
Parameters
Each parameter has a data type. A collation can be specified for string-type parameters, using the `COLLATE` clause.
- Input Parameters: Input parameters are presented as a parenthesized list following the name of the function. They are passed by value into the function, so any changes inside the function have no effect on the parameters in the caller. The `NOT NULL` constraint can also be specified for any input parameter, to prevent `NULL` being passed or assigned to it. Input parameters may have default values. Parameters with default values specified must be added at the end of the list of parameters.
- Output Parameter: The `RETURNS` clause specifies the return type of the stored function. If a function returns a string value, then it is possible to specify the collation using the `COLLATE` clause. As a return type, you can specify a data type, a domain, the type of a domain (using `TYPE OF`), or the type of a column of a table or view (using `TYPE OF COLUMN`).
Deterministic functions

The optional `DETERMINISTIC` clause indicates that the function is deterministic. Deterministic functions always return the same result for the same set of inputs. Non-deterministic functions can return different results for each invocation, even for the same set of inputs. If a function is specified as deterministic, then such a function might not be called again if it has already been called once with the given set of inputs, and instead takes the result from a metadata cache.

Current versions of Firebird do not cache results of deterministic functions. Specifying the `DETERMINISTIC` clause is in effect a promise that the function will return the same result for the same inputs; the engine neither verifies nor enforces this.
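This can be illustrated with a sketch (the function name is illustrative): the engine accepts the `DETERMINISTIC` clause even for a function whose result clearly varies between calls.

```sql
SET TERM ^ ;

-- Marked DETERMINISTIC, yet RAND() yields a different value each call;
-- the engine does not verify the promise
CREATE FUNCTION FN_T ()
RETURNS DOUBLE PRECISION DETERMINISTIC
AS
BEGIN
  RETURN RAND();
END^

SET TERM ; ^

-- Successive invocations may return different values
SELECT FN_T() FROM RDB$DATABASE;
SELECT FN_T() FROM RDB$DATABASE;
```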
SQL Security

The `SQL SECURITY` clause specifies the security context for executing other routines or inserting into other tables. When SQL Security is not specified, the default value of the database is applied at runtime. The `SQL SECURITY` clause can only be specified for PSQL functions, and is not valid for functions defined in a package.

See also SQL Security in chapter Security.
Variable, Cursor and Subroutine Declarations
The optional declarations section, located at the start of the body of the function definition, defines variables (including cursors) and subroutines local to the function.
Local variable declarations follow the same rules as parameters regarding specification of the data type.
See details in the PSQL chapter for `DECLARE VARIABLE`, `DECLARE CURSOR`, `DECLARE FUNCTION`, and `DECLARE PROCEDURE`.
Function Body

The header section is followed by the function body, consisting of one or more PSQL statements enclosed between the outer keywords `BEGIN` and `END`. Multiple `BEGIN … END` blocks of terminated statements may be embedded inside the function body.
External UDR Functions

A stored function can also be located in an external module. In this case, instead of a function body, the `CREATE FUNCTION` specifies the location of the function in the external module using the `EXTERNAL` clause. The optional `NAME` clause specifies the name of the external module, the name of the function inside the module, and — optionally — user-defined information. The required `ENGINE` clause specifies the name of the UDR engine that handles communication between Firebird and the external module. The optional `AS` clause accepts a string literal “body”, which can be used by the engine or module for various purposes.

External functions created with the legacy UDF mechanism are deprecated, a holdover from previous Firebird versions. Their capabilities are significantly inferior to those of the new type of external UDR functions.
Who Can Create a Function

The `CREATE FUNCTION` statement can be executed by:

- Users with the `CREATE FUNCTION` privilege

The user who created the stored function becomes its owner.
`CREATE FUNCTION` Examples

- Creating a stored function:

```sql
CREATE FUNCTION ADD_INT (A INT, B INT DEFAULT 0)
RETURNS INT
AS
BEGIN
  RETURN A + B;
END
```

Calling in a select:

```sql
SELECT ADD_INT(2, 3) AS R FROM RDB$DATABASE
```

Call inside PSQL code; the second, optional parameter is not specified:

```sql
MY_VAR = ADD_INT(A);
```
- Creating a deterministic stored function:

```sql
CREATE FUNCTION FN_E()
RETURNS DOUBLE PRECISION DETERMINISTIC
AS
BEGIN
  RETURN EXP(1);
END
```
- Creating a stored function with table column type parameters. Returns the name of a type by field name and value:

```sql
CREATE FUNCTION GET_MNEMONIC (
  AFIELD_NAME TYPE OF COLUMN RDB$TYPES.RDB$FIELD_NAME,
  ATYPE TYPE OF COLUMN RDB$TYPES.RDB$TYPE)
RETURNS TYPE OF COLUMN RDB$TYPES.RDB$TYPE_NAME
AS
BEGIN
  RETURN (SELECT RDB$TYPE_NAME
          FROM RDB$TYPES
          WHERE RDB$FIELD_NAME = :AFIELD_NAME
            AND RDB$TYPE = :ATYPE);
END
```
- Creating an external stored function. Create a function located in an external module (UDR). The function implementation is located in the external module `udrcpp_example`. The name of the function inside the module is `wait_event`.

```sql
CREATE FUNCTION wait_event (
  event_name varchar(31) CHARACTER SET ascii
)
RETURNS INTEGER
EXTERNAL NAME 'udrcpp_example!Wait_event'
ENGINE udr
```
- Creating a stored function containing a sub-function. Creating a function to convert a number to hexadecimal format:

```sql
CREATE FUNCTION INT_TO_HEX (
  ANumber BIGINT,
  AByte_Per_Number SMALLINT = 8)
RETURNS CHAR(66)
AS
  DECLARE VARIABLE xMod SMALLINT;
  DECLARE VARIABLE xResult VARCHAR(64);
  DECLARE FUNCTION TO_HEX (ANum SMALLINT)
  RETURNS CHAR
  AS
  BEGIN
    RETURN CASE ANum
      WHEN 0 THEN '0'
      WHEN 1 THEN '1'
      WHEN 2 THEN '2'
      WHEN 3 THEN '3'
      WHEN 4 THEN '4'
      WHEN 5 THEN '5'
      WHEN 6 THEN '6'
      WHEN 7 THEN '7'
      WHEN 8 THEN '8'
      WHEN 9 THEN '9'
      WHEN 10 THEN 'A'
      WHEN 11 THEN 'B'
      WHEN 12 THEN 'C'
      WHEN 13 THEN 'D'
      WHEN 14 THEN 'E'
      WHEN 15 THEN 'F'
      ELSE NULL
    END;
  END
BEGIN
  xMod = MOD(ANumber, 16);
  ANumber = ANumber / 16;
  xResult = TO_HEX(xMod);
  WHILE (ANumber > 0) DO
  BEGIN
    xMod = MOD(ANumber, 16);
    ANumber = ANumber / 16;
    xResult = TO_HEX(xMod) || xResult;
  END
  RETURN '0x' || LPAD(xResult, AByte_Per_Number * 2, '0');
END
```
- With `DEFINER` set for function `f`, user `US` needs only the `EXECUTE` privilege on `f`. If it were set for `INVOKER`, the user would also need the `INSERT` privilege on table `t`.

```sql
set term ^;
create function f (i integer) returns int
SQL SECURITY DEFINER
as
begin
  insert into t values (:i);
  return i + 1;
end^
set term ;^

grant execute on function f to user us;
commit;

connect 'localhost:/tmp/59.fdb' user us password 'pas';
select f(3) from rdb$database;
```
5.9.2. ALTER FUNCTION
Alters a stored function
DSQL
ALTER FUNCTION funcname
[ ( [ <in_params> ] ) ]
RETURNS <domain_or_non_array_type> [COLLATE collation]
[DETERMINISTIC]
{<psql_function> | <external-module-body>}
!! See syntax of `CREATE FUNCTION` for further rules !!

The `ALTER FUNCTION` statement allows the following changes to a stored function definition:

- the set and characteristics of input parameters and the output type
- local variables, named cursors, and subroutines
- code in the body of the stored function

For external functions (UDR), you can change the entry point and engine name. For legacy external functions declared using `DECLARE EXTERNAL FUNCTION` — also known as UDFs — it is not possible to convert to PSQL and vice versa.

After `ALTER FUNCTION` executes, existing privileges remain intact and dependencies are not affected.

Altering a function without specifying the `SQL SECURITY` clause will remove the SQL Security property if currently set for this function. This means the behaviour will revert to the database default.

Take care when changing the number and type of input parameters and the output type of a stored function. Existing application code and procedures, functions and triggers that call it could become invalid because the new description of the parameters is incompatible with the old calling format. For information on how to troubleshoot such a situation, see the article The `RDB$VALID_BLR` Field in the Appendix.
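As a sketch (reusing the illustrative `ADD_INT` function shown earlier), redefining the body of an existing function while its `EXECUTE` privileges stay intact:

```sql
SET TERM ^ ;

-- Replace the body of an existing function;
-- granted privileges and dependencies are unaffected
ALTER FUNCTION ADD_INT (A INT, B INT DEFAULT 0)
RETURNS INT
AS
BEGIN
  RETURN A + B + 1;  -- the new body replaces the old one
END^

SET TERM ; ^
```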
Who Can Alter a Function

The `ALTER FUNCTION` statement can be executed by:

- The owner of the stored function
- Users with the `ALTER ANY FUNCTION` privilege
5.9.3. CREATE OR ALTER FUNCTION
Creates a stored function if it does not exist, or alters a stored function
DSQL
CREATE OR ALTER FUNCTION funcname
[ ( [ <in_params> ] ) ]
RETURNS <domain_or_non_array_type> [COLLATE collation]
[DETERMINISTIC]
{<psql_function> | <external-module-body>}
!! See syntax of `CREATE FUNCTION` for further rules !!

The `CREATE OR ALTER FUNCTION` statement creates a new stored function or alters an existing one. If the stored function does not exist, it will be created by invoking a `CREATE FUNCTION` statement transparently. If the function already exists, it will be altered and compiled (through `ALTER FUNCTION`) without affecting its existing privileges and dependencies.
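A minimal sketch (the function name is illustrative): the same script creates the function on the first run and alters it in place on later runs.

```sql
SET TERM ^ ;

-- Created on first execution, altered afterwards,
-- keeping existing privileges and dependencies
CREATE OR ALTER FUNCTION FN_PI ()
RETURNS DOUBLE PRECISION DETERMINISTIC
AS
BEGIN
  RETURN PI();
END^

SET TERM ; ^
```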
5.9.4. DROP FUNCTION
Drops a stored function
DSQL
DROP FUNCTION funcname
| Parameter | Description |
|---|---|
| funcname | Name of an existing stored function |
The `DROP FUNCTION` statement deletes an existing stored function. If the stored function has any dependencies, the attempt to delete it will fail and raise an error.
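For example (the function name is illustrative):

```sql
DROP FUNCTION ADD_INT;
```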
Who Can Drop a Function

The `DROP FUNCTION` statement can be executed by:

- The owner of the stored function
- Users with the `DROP ANY FUNCTION` privilege
5.9.5. RECREATE FUNCTION
Drops a stored function if it exists, and creates a stored function
DSQL
RECREATE FUNCTION funcname
[ ( [ <in_params> ] ) ]
RETURNS <domain_or_non_array_type> [COLLATE collation]
[DETERMINISTIC]
{<psql_function> | <external-module-body>}
!! See syntax of `CREATE FUNCTION` for further rules !!

The `RECREATE FUNCTION` statement creates a new stored function or recreates an existing one. If there is a function with this name already, the engine will try to drop it and then create a new one. Recreating an existing function will fail at `COMMIT` if the function has dependencies.

Be aware that dependency errors are not detected until the `COMMIT` phase of this operation.

After a function is successfully recreated, existing privileges to execute the stored function and the privileges of the stored function itself are dropped.
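A sketch of the statement (the function name is illustrative):

```sql
SET TERM ^ ;

-- Drops any existing ADD_INT (together with privileges granted on it),
-- then creates the function anew
RECREATE FUNCTION ADD_INT (A INT, B INT DEFAULT 0)
RETURNS INT
AS
BEGIN
  RETURN A + B;
END^

SET TERM ; ^
```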
5.10. EXTERNAL FUNCTION

External functions (UDFs) have been aggressively deprecated in Firebird 4.0.

External functions, also known as “User-Defined Functions” (UDFs), are programs written in an external programming language and stored in dynamically loaded libraries. Once declared in a database, they become available in dynamic and procedural statements as though they were implemented in the SQL language.

External functions extend the possibilities for processing data with SQL considerably. To make a function available to a database, it is declared using the statement `DECLARE EXTERNAL FUNCTION`. The library containing a function is loaded when any function included in it is called.

External functions may be contained in more than one library — or “module”, as it is referred to in the syntax.

UDFs are fundamentally insecure. We recommend avoiding their use whenever possible, and disabling UDFs in your database configuration (`UdfAccess = None` in `firebird.conf`).
5.10.1. DECLARE EXTERNAL FUNCTION
Declares a user-defined function (UDF) in the current database
DSQL, ESQL
DECLARE EXTERNAL FUNCTION funcname
  [{ <arg_desc_list> | ( <arg_desc_list> ) }]
  RETURNS { <return_value> | ( <return_value> ) }
  ENTRY_POINT 'entry_point' MODULE_NAME 'library_name'

<arg_desc_list> ::= <arg_type_decl> [, <arg_type_decl> ...]

<arg_type_decl> ::=
  <udf_data_type> [BY {DESCRIPTOR | SCALAR_ARRAY} | NULL]

<udf_data_type> ::=
    <scalar_datatype>
  | BLOB
  | CSTRING(length) [ CHARACTER SET charset ]

<scalar_datatype> ::= !! See Scalar Data Types Syntax !!

<return_value> ::=
  { <udf_data_type> | PARAMETER param_num }
  [{ BY VALUE | BY DESCRIPTOR [FREE_IT] | FREE_IT }]
| Parameter | Description |
|---|---|
| funcname | Function name in the database. The maximum length is 63 characters. It should be unique among all internal and external function names in the database and need not be the same name as the name exported from the UDF library via `ENTRY_POINT` |
| entry_point | The exported name of the function |
| library_name | The name of the module (`MODULE_NAME`) from which the function is exported |
| length | The maximum length of a null-terminated string, specified in bytes |
| charset | Character set of the CSTRING |
| param_num | The number of the input parameter, numbered from 1 in the list of input parameters in the declaration, describing the data type that will be returned by the function |
The `DECLARE EXTERNAL FUNCTION` statement makes a user-defined function available in the database. UDF declarations must be made in each database that is going to use them. There is no need to declare UDFs that will never be used.

The name of the external function must be unique among all function names. It may be different from the exported name of the function, as specified in the `ENTRY_POINT` argument.
`DECLARE EXTERNAL FUNCTION` Input Parameters

The input parameters of the function follow the name of the function and are separated with commas. Each parameter has an SQL data type specified for it. Arrays cannot be used as function parameters. In addition to the SQL types, the `CSTRING` type is available for specifying a null-terminated string with a maximum length of LENGTH bytes.
There are several mechanisms for passing a parameter from the Firebird engine to an external function; each of these mechanisms is discussed below.

By default, input parameters are passed by reference. There is no separate clause to explicitly indicate that parameters are passed by reference. When passing a `NULL` value by reference, it is converted to the equivalent of zero, for example, the number `0` or an empty string (`''`).

If the keyword `NULL` is specified after a parameter, then when a `NULL` value is passed, a null pointer will be passed to the external function. Declaring a function with the `NULL` keyword does not guarantee that the function will correctly handle a `NULL` input parameter. Any function must be written or rewritten to correctly handle `NULL` values. Always use the function declaration as provided by its developer.
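A sketch of a declaration using the `NULL` keyword (the function and module names follow the classic `ib_udf` library and are shown for illustration only):

```sql
-- NULL after the parameter type: a null pointer, not an empty
-- string, is passed to the function when the argument is NULL
DECLARE EXTERNAL FUNCTION ltrim
  CSTRING(255) NULL
  RETURNS CSTRING(255) FREE_IT
  ENTRY_POINT 'IB_UDF_ltrim' MODULE_NAME 'ib_udf';
```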
If `BY DESCRIPTOR` is specified, then the input parameter is passed by descriptor. In this case, the UDF parameter will receive a pointer to an internal structure known as a descriptor. The descriptor contains information about the data type, subtype, precision, character set and collation, scale, a pointer to the data itself and some flags, including the `NULL` indicator. This declaration only works if the external function is written using a handle.

When passing a function parameter by descriptor, the passed value is not cast to the declared data type.
The `BY SCALAR_ARRAY` clause is used when passing arrays as input parameters. Unlike other types, you cannot return an array from a UDF.
Clauses and Keywords

- `RETURNS` clause: (Required) specifies the output parameter returned by the function. A function is scalar: it returns one value (output parameter). The output parameter can be of any SQL type (except an array or an array element) or a null-terminated string (`CSTRING`). The output parameter can be passed by reference (the default), by descriptor or by value. If the `BY DESCRIPTOR` clause is specified, the output parameter is passed by descriptor. If the `BY VALUE` clause is specified, the output parameter is passed by value.
- `PARAMETER` keyword: specifies that the function returns the value from the parameter under number param_num. It is necessary if you need to return a value of data type `BLOB`.
- `FREE_IT` keyword: means that the memory allocated for storing the return value will be freed after the function is executed. It is used only if the memory was allocated dynamically in the UDF. In such a UDF, the memory must be allocated with the help of the `ib_util_malloc` function from the `ib_util` module, a requirement for compatibility with the functions used in Firebird code and in the code of the shipped UDF modules, for allocating and freeing memory.
- `ENTRY_POINT` clause: specifies the name of the entry point (the name of the imported function), as exported from the module.
- `MODULE_NAME` clause: defines the name of the module where the exported function is located. The link to the module should not be the full path and extension of the file, if that can be avoided. If the module is located in the default location (in the `../UDF` subdirectory of the Firebird server root) or in a location explicitly configured in `firebird.conf`, it makes it easier to move the database between different platforms. The `UDFAccess` parameter in the firebird.conf file allows access restrictions to external functions modules to be configured.
Any user connected to the database can declare an external function (UDF).
Who Can Create an External Function

The `DECLARE EXTERNAL FUNCTION` statement can be executed by:

- Users with the `CREATE FUNCTION` privilege

The user who created the function becomes its owner.
Examples using `DECLARE EXTERNAL FUNCTION`

- Declaring the `addDay` external function located in the `fbudf` module. The input and output parameters are passed by reference.

```sql
DECLARE EXTERNAL FUNCTION addDay
  TIMESTAMP, INT
  RETURNS TIMESTAMP
  ENTRY_POINT 'addDay' MODULE_NAME 'fbudf';
```
- Declaring the `invl` external function located in the `fbudf` module. The input and output parameters are passed by descriptor.

```sql
DECLARE EXTERNAL FUNCTION invl
  INT BY DESCRIPTOR, INT BY DESCRIPTOR
  RETURNS INT BY DESCRIPTOR
  ENTRY_POINT 'idNvl' MODULE_NAME 'fbudf';
```
- Declaring the `isLeapYear` external function located in the `fbudf` module. The input parameter is passed by reference, while the output parameter is passed by value.

```sql
DECLARE EXTERNAL FUNCTION isLeapYear
  TIMESTAMP
  RETURNS INT BY VALUE
  ENTRY_POINT 'isLeapYear' MODULE_NAME 'fbudf';
```
- Declaring the `i64Truncate` external function located in the `fbudf` module. The input and output parameters are passed by descriptor. The second parameter of the function is used as the return value.

```sql
DECLARE EXTERNAL FUNCTION i64Truncate
  NUMERIC(18) BY DESCRIPTOR, NUMERIC(18) BY DESCRIPTOR
  RETURNS PARAMETER 2
  ENTRY_POINT 'fbtruncate' MODULE_NAME 'fbudf';
```
5.10.2. ALTER EXTERNAL FUNCTION
Alters the entry point and/or the module name of a user-defined function (UDF)
DSQL
ALTER EXTERNAL FUNCTION funcname [ENTRY_POINT 'new_entry_point'] [MODULE_NAME 'new_library_name']
| Parameter | Description |
|---|---|
| funcname | Function name in the database |
| new_entry_point | The new exported name of the function |
| new_library_name | The new name of the module (`MODULE_NAME`) where the exported function is located |
The `ALTER EXTERNAL FUNCTION` statement changes the entry point and/or the module name for a user-defined function (UDF). Existing dependencies remain intact after the statement containing the change(s) is executed.

- The `ENTRY_POINT` clause: is for specifying the new entry point (the name of the function as exported from the module).
- The `MODULE_NAME` clause: is for specifying the new name of the module where the exported function is located.
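For example (the new module name is illustrative), pointing an existing UDF declaration at a relocated module:

```sql
-- Repoint the invl declaration to a renamed library;
-- existing dependencies are unaffected
ALTER EXTERNAL FUNCTION invl
  MODULE_NAME 'fbudf2';
```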
Any user connected to the database can change the entry point and the module name.
Who Can Alter an External Function

The `ALTER EXTERNAL FUNCTION` statement can be executed by:

- The owner of the external function
- Users with the `ALTER ANY FUNCTION` privilege
5.10.3. DROP EXTERNAL FUNCTION
Drops a user-defined function (UDF) from the current database
DSQL, ESQL
DROP EXTERNAL FUNCTION funcname
| Parameter | Description |
|---|---|
| funcname | Function name in the database |
The `DROP EXTERNAL FUNCTION` statement deletes the declaration of a user-defined function from the database. If there are any dependencies on the external function, the statement will fail and raise an error.

Any user connected to the database can delete the declaration of an external function.
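For example (the function name is illustrative), removing the declaration of a UDF declared earlier:

```sql
DROP EXTERNAL FUNCTION addDay;
```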
Who Can Drop an External Function

The `DROP EXTERNAL FUNCTION` statement can be executed by:

- The owner of the external function
- Users with the `DROP ANY FUNCTION` privilege
5.11. PACKAGE
A package is a group of procedures and functions managed as one entity.
5.11.1. CREATE PACKAGE
Creates a package header
DSQL
CREATE PACKAGE package_name
[SQL SECURITY {INVOKER | DEFINER}]
AS
BEGIN
  [ <package_item> ... ]
END

<package_item> ::=
    <function_decl>;
  | <procedure_decl>;

<function_decl> ::=
  FUNCTION funcname [ ( [ <in_params> ] ) ]
  RETURNS <domain_or_non_array_type> [COLLATE collation]
  [DETERMINISTIC]

<procedure_decl> ::=
  PROCEDURE procname [ ( [ <in_params> ] ) ]
  [RETURNS (<out_params>)]

<in_params> ::= <inparam> [, <inparam> ... ]
<inparam> ::= <param_decl> [ { = | DEFAULT } <value> ]
<out_params> ::= <outparam> [, <outparam> ...]
<outparam> ::= <param_decl>
<value> ::= { literal | NULL | context_var }
<param_decl> ::= paramname <domain_or_non_array_type> [NOT NULL] [COLLATE collation]
<domain_or_non_array_type> ::= !! See Scalar Data Types Syntax !!
Parameter | Description |
---|---|
package_name |
Package name. The maximum length is 63 characters. The package name must be unique among all package names. |
function_decl |
Function declaration |
procedure_decl |
Procedure declaration |
funcname |
Function name. The maximum length is 63 characters. The function name must be unique within the package. |
procname |
Procedure name. The maximum length is 63 characters. The procedure name must be unique within the package. |
collation |
Collation |
inparam |
Input parameter declaration |
outparam |
Output parameter declaration |
literal |
A literal value that is assignment-compatible with the data type of the parameter |
context_var |
Any context variable that is assignment-compatible with the data type of the parameter |
paramname |
The name of an input parameter of a procedure or function, or an output parameter of a procedure. The maximum length is 63 characters. The name of the parameter must be unique among input and output parameters of the procedure or function. |
The CREATE PACKAGE
statement creates a new package header.
Routines (procedures and functions) declared in the package header are available outside the package using the full identifier (package_name.proc_name or package_name.func_name).
Routines defined only in the package body — but not in the package header — are not visible outside the package.
Package procedure and function names may shadow global routines
If a package header or package body declares a procedure or function with the same name as a stored procedure or function in the global namespace, it is not possible to call that global procedure or function from the package body. In this case, the procedure or function of the package will always be called. For this reason, it is recommended that the names of stored procedures and functions in packages do not overlap with names of stored procedures and functions in the global namespace. |
Statement Terminators
Some SQL statement editors — specifically the isql utility that comes with Firebird, and possibly some third-party editors — employ an internal convention that requires all statements to be terminated with a semicolon. This creates a conflict with PSQL syntax when coding in these environments. If you are unacquainted with this problem and its solution, please study the details in the PSQL chapter in the section entitled Switching the Terminator in isql.
SQL Security
The SQL SECURITY
clause specifies the security context for executing other routines or inserting into other tables from functions or procedures defined in this package.
When SQL Security is not specified, the default value of the database is applied at runtime.
The SQL SECURITY
clause can only be specified for the package, not for individual procedures and functions of the package.
See also SQL Security in chapter Security.
Procedure and Function Parameters
For details on stored procedure parameters, see Parameters in CREATE PROCEDURE
.
For details on function parameters, see Parameters in CREATE FUNCTION
.
Who Can Create a Package
The CREATE PACKAGE
statement can be executed by:
-
Users with the
CREATE PACKAGE
privilege
The user who created the package header becomes its owner.
Examples of CREATE PACKAGE
-
Create a package header
CREATE PACKAGE APP_VAR
AS
BEGIN
  FUNCTION GET_DATEBEGIN() RETURNS DATE DETERMINISTIC;
  FUNCTION GET_DATEEND() RETURNS DATE DETERMINISTIC;
  PROCEDURE SET_DATERANGE(ADATEBEGIN DATE,
    ADATEEND DATE DEFAULT CURRENT_DATE);
END
-
With DEFINER set for package pk, user US needs only the EXECUTE privilege on pk. If it were set to INVOKER, either the user or the package would also need the INSERT privilege on table t.

create table t (i integer);
set term ^;
create package pk SQL SECURITY DEFINER
as
begin
  function f(i integer) returns int;
end^
create package body pk
as
begin
  function f(i integer) returns int
  as
  begin
    insert into t values (:i);
    return i + 1;
  end
end^
set term ;^
grant execute on package pk to user us;
commit;
connect 'localhost:/tmp/69.fdb' user us password 'pas';
select pk.f(3) from rdb$database;
5.11.2. ALTER PACKAGE
Alters a package header
DSQL
ALTER PACKAGE package_name
[SQL SECURITY {INVOKER | DEFINER}]
AS
BEGIN
[ <package_item> ... ]
END
!! See syntax of CREATE PACKAGE
for further rules!!
The ALTER PACKAGE
statement modifies the package header.
It can be used to change the number and definition of procedures and functions, including their input and output parameters.
However, the source and compiled form of the package body is retained, though the body might be incompatible after the change to the package header.
The validity of a package body for the defined header is stored in the column RDB$PACKAGES.RDB$VALID_BODY_FLAG
.
Altering a package without specifying the SQL SECURITY
clause will remove the SQL Security property if currently set for this package.
This means the behaviour will revert to the database default.
Who Can Alter a Package
The ALTER PACKAGE
statement can be executed by:
-
The owner of the package
-
Users with the
ALTER ANY PACKAGE
privilege
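Examples of ALTER PACKAGE

The following sketch adds a function to the APP_VAR header from the CREATE PACKAGE example (the added GET_DAYCOUNT function is hypothetical). After this change the existing package body is flagged invalid until it is recreated with a matching implementation:

```sql
SET TERM ^;
ALTER PACKAGE APP_VAR
AS
BEGIN
  FUNCTION GET_DATEBEGIN() RETURNS DATE DETERMINISTIC;
  FUNCTION GET_DATEEND() RETURNS DATE DETERMINISTIC;
  -- Hypothetical addition; the package body must now implement it too
  FUNCTION GET_DAYCOUNT() RETURNS INTEGER;
  PROCEDURE SET_DATERANGE(ADATEBEGIN DATE,
    ADATEEND DATE DEFAULT CURRENT_DATE);
END^
SET TERM ;^
```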
5.11.3. CREATE OR ALTER PACKAGE
Creates a package header if it does not exist, or alters a package header
DSQL
CREATE OR ALTER PACKAGE package_name
[SQL SECURITY {INVOKER | DEFINER}]
AS
BEGIN
[ <package_item> ... ]
END
!! See syntax of CREATE PACKAGE
for further rules!!
The CREATE OR ALTER PACKAGE
statement creates a new package or modifies an existing package header.
If the package header does not exist, it will be created using CREATE PACKAGE
.
If it already exists, then it will be modified using ALTER PACKAGE
while retaining existing privileges and dependencies.
Examples of CREATE OR ALTER PACKAGE
CREATE OR ALTER PACKAGE APP_VAR
AS
BEGIN
FUNCTION GET_DATEBEGIN() RETURNS DATE DETERMINISTIC;
FUNCTION GET_DATEEND() RETURNS DATE DETERMINISTIC;
PROCEDURE SET_DATERANGE(ADATEBEGIN DATE,
ADATEEND DATE DEFAULT CURRENT_DATE);
END
5.11.4. DROP PACKAGE
Drops a package header
DSQL
DROP PACKAGE package_name
Parameter | Description |
---|---|
package_name |
Package name |
The DROP PACKAGE
statement deletes an existing package header.
If a package body exists, it will be dropped together with the package header.
If there are still dependencies on the package, an error will be raised.
Who Can Drop a Package
The DROP PACKAGE
statement can be executed by:
-
The owner of the package
-
Users with the
DROP ANY PACKAGE
privilege
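Example of DROP PACKAGE

Dropping the APP_VAR package header from the earlier examples; its body, if present, is dropped with it:

```sql
DROP PACKAGE APP_VAR;
```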
5.11.5. RECREATE PACKAGE
Drops a package header if it exists, and creates a package header
DSQL
RECREATE PACKAGE package_name
[SQL SECURITY {INVOKER | DEFINER}]
AS
BEGIN
[ <package_item> ... ]
END
!! See syntax of CREATE PACKAGE
for further rules!!
The RECREATE PACKAGE
statement creates a new package or recreates an existing package header.
If a package header with the same name already exists, then this statement will first drop it and then create a new package header.
It is not possible to recreate the package header if there are still dependencies on the existing package, or if the body of the package exists.
Existing privileges of the package itself are not preserved, nor are privileges to execute the procedures or functions of the package.
Examples of RECREATE PACKAGE
RECREATE PACKAGE APP_VAR
AS
BEGIN
FUNCTION GET_DATEBEGIN() RETURNS DATE DETERMINISTIC;
FUNCTION GET_DATEEND() RETURNS DATE DETERMINISTIC;
PROCEDURE SET_DATERANGE(ADATEBEGIN DATE,
ADATEEND DATE DEFAULT CURRENT_DATE);
END
5.12. PACKAGE BODY
5.12.1. CREATE PACKAGE BODY
Creates a package body
DSQL
CREATE PACKAGE BODY name
AS
BEGIN
  [ <package_item> ... ]
  [ <package_body_item> ... ]
END

<package_item> ::= !! See CREATE PACKAGE syntax !!

<package_body_item> ::= <function_impl> | <procedure_impl>

<function_impl> ::=
  FUNCTION funcname [ ( [ <in_params> ] ) ]
  RETURNS <domain_or_non_array_type> [COLLATE collation]
  [DETERMINISTIC]
  <module-body>

<procedure_impl> ::=
  PROCEDURE procname [ ( [ <in_params> ] ) ]
  [RETURNS (<out_params>)]
  <module-body>

<module-body> ::= !! See Syntax of Module Body !!
<in_params> ::= !! See CREATE PACKAGE syntax !! !! See also Rules below !!
<out_params> ::= !! See CREATE PACKAGE syntax !!
<domain_or_non_array_type> ::= !! See Scalar Data Types Syntax !!
Parameter | Description |
---|---|
package_name |
Package name. The maximum length is 63 characters. The package name must be unique among all package names. |
function_impl |
Function implementation. Essentially a CREATE FUNCTION statement without the CREATE keyword |
procedure_impl |
Procedure implementation. Essentially a CREATE PROCEDURE statement without the CREATE keyword |
funcname |
Function name. The maximum length is 63 characters. The function name must be unique within the package. |
collation |
Collation |
procname |
Procedure name. The maximum length is 63 characters. The procedure name must be unique within the package. |
The CREATE PACKAGE BODY
statement creates a new package body.
The package body can only be created after the package header has been created.
If there is no package header with name package_name, an error is raised.
All procedures and functions declared in the package header must be implemented in the package body. Additional procedures and functions may be defined and implemented in the package body only. Procedures and functions defined in the package body, but not declared in the package header, are not visible outside the package body.
The names of procedures and functions defined in the package body must be unique among the names of procedures and functions defined in the package header and implemented in the package body.
Package procedure and function names may shadow global routines
If a package header or package body declares a procedure or function with the same name as a stored procedure or function in the global namespace, it is not possible to call that global procedure or function from the package body. In this case, the procedure or function of the package will always be called. For this reason, it is recommended that the names of stored procedures and functions in packages do not overlap with names of stored procedures and functions in the global namespace. |
-
In the package body, all procedures and functions must be implemented with the same signature as declared in the header and at the beginning of the package body
-
The default values for procedure or function parameters cannot be overridden (as specified in the package header or in <package_item>). This means default values can only be defined in <package_body_item> for procedures or functions that have not been defined in the package header or earlier in the package body.
UDF declarations (DECLARE EXTERNAL FUNCTION) are not supported in packages. |
Who Can Create a Package Body
The CREATE PACKAGE BODY
statement can be executed by:
-
The owner of the package
-
Users with the
ALTER ANY PACKAGE
privilege
Examples of CREATE PACKAGE BODY
CREATE PACKAGE BODY APP_VAR
AS
BEGIN
-- Returns the start date of the period
FUNCTION GET_DATEBEGIN() RETURNS DATE DETERMINISTIC
AS
BEGIN
RETURN RDB$GET_CONTEXT('USER_SESSION', 'DATEBEGIN');
END
-- Returns the end date of the period
FUNCTION GET_DATEEND() RETURNS DATE DETERMINISTIC
AS
BEGIN
RETURN RDB$GET_CONTEXT('USER_SESSION', 'DATEEND');
END
-- Sets the date range of the working period
PROCEDURE SET_DATERANGE(ADATEBEGIN DATE, ADATEEND DATE)
AS
BEGIN
RDB$SET_CONTEXT('USER_SESSION', 'DATEBEGIN', ADATEBEGIN);
RDB$SET_CONTEXT('USER_SESSION', 'DATEEND', ADATEEND);
END
END
5.12.2. DROP PACKAGE BODY
Drops a package body
DSQL
DROP PACKAGE BODY package_name
Parameter | Description |
---|---|
package_name |
Package name |
The DROP PACKAGE BODY
statement deletes the package body.
Who Can Drop a Package Body
The DROP PACKAGE BODY
statement can be executed by:
-
The owner of the package
-
Users with the
ALTER ANY PACKAGE
privilege
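Example of DROP PACKAGE BODY

Dropping only the body of the APP_VAR package from the earlier examples; the header remains, so a body can be created again afterwards:

```sql
DROP PACKAGE BODY APP_VAR;
```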
5.12.3. RECREATE PACKAGE BODY
Drops a package body if it exists, and creates a package body
DSQL
RECREATE PACKAGE BODY name
AS
BEGIN
[ <package_item> ... ]
[ <package_body_item> ... ]
END
!! See syntax of CREATE PACKAGE BODY
for further rules !!
The RECREATE PACKAGE BODY
statement creates a new or recreates an existing package body.
If a package body with the same name already exists, the statement will try to drop it and then create a new package body.
After recreating the package body, privileges of the package and its routines are preserved.
See CREATE PACKAGE BODY
for more details.
Examples of RECREATE PACKAGE BODY
RECREATE PACKAGE BODY APP_VAR
AS
BEGIN
-- Returns the start date of the period
FUNCTION GET_DATEBEGIN() RETURNS DATE DETERMINISTIC
AS
BEGIN
RETURN RDB$GET_CONTEXT('USER_SESSION', 'DATEBEGIN');
END
-- Returns the end date of the period
FUNCTION GET_DATEEND() RETURNS DATE DETERMINISTIC
AS
BEGIN
RETURN RDB$GET_CONTEXT('USER_SESSION', 'DATEEND');
END
-- Sets the date range of the working period
PROCEDURE SET_DATERANGE(ADATEBEGIN DATE, ADATEEND DATE)
AS
BEGIN
RDB$SET_CONTEXT('USER_SESSION', 'DATEBEGIN', ADATEBEGIN);
RDB$SET_CONTEXT('USER_SESSION', 'DATEEND', ADATEEND);
END
END
5.13. FILTER
A BLOB FILTER
is a database object that is a special type of external function, with the sole purpose of taking a BLOB
object in one format and converting it to a BLOB
object in another format.
The formats of the BLOB
objects are specified with user-defined BLOB
subtypes.
External functions for converting BLOB
types are stored in dynamic libraries and loaded when necessary.
For more details on BLOB
subtypes, see Binary Data Types.
5.13.1. DECLARE FILTER
Declares a BLOB
filter in the current database
DSQL, ESQL
DECLARE FILTER filtername
  INPUT_TYPE <sub_type> OUTPUT_TYPE <sub_type>
  ENTRY_POINT 'function_name' MODULE_NAME 'library_name'

<sub_type> ::= number | <mnemonic>

<mnemonic> ::=
    BINARY | TEXT | BLR | ACL | RANGES
  | SUMMARY | FORMAT | TRANSACTION_DESCRIPTION
  | EXTERNAL_FILE_DESCRIPTION | user_defined
Parameter | Description |
---|---|
filtername |
Filter name in the database.
The maximum length is 63 characters.
It need not be the same as the name exported from the filter library via ENTRY_POINT |
sub_type |
BLOB subtype |
number |
BLOB subtype number |
mnemonic |
BLOB subtype mnemonic name |
function_name |
The exported name (entry point) of the function |
library_name |
The name of the module where the filter is located |
user_defined |
User-defined BLOB subtype mnemonic name |
The DECLARE FILTER
statement makes a BLOB
filter available to the database.
The name of the BLOB
filter must be unique among the names of BLOB
filters.
Specifying the Subtypes
The subtypes can be specified as the subtype number or as the subtype mnemonic name.
Custom subtypes must be represented by negative numbers (from -1 to -32,768), or their user-defined name from the RDB$TYPES
table.
An attempt to declare more than one BLOB
filter with the same combination of the input and output types will fail with an error.
- The INPUT_TYPE clause defines the BLOB subtype of the object to be converted.
- The OUTPUT_TYPE clause defines the BLOB subtype of the object to be created.
Mnemonic names can be defined for custom BLOB subtypes by inserting rows into the system table RDB$TYPES (the value of the column RDB$FIELD_NAME must be 'RDB$FIELD_SUB_TYPE').
After the transaction is committed, the mnemonic names can be used in declarations when you create new filters.

Warning

In general, the system tables are not writable by users.
However, inserting custom types into RDB$TYPES is still possible for administrators and the database owner. |
Parameters
- The ENTRY_POINT clause defines the name of the entry point (the name of the imported function) in the module.
- The MODULE_NAME clause defines the name of the module where the exported function is located. By default, modules must be located in the UDF folder of the root directory on the server. The UDFAccess parameter in firebird.conf allows editing of access restrictions to filter libraries.
Who Can Create a BLOB Filter?
The DECLARE FILTER
statement can be executed by:
-
Users with the
CREATE FILTER
privilege
The user executing the DECLARE FILTER
statement becomes the owner of the filter.
Examples of DECLARE FILTER
-
Creating a BLOB filter using subtype numbers.

DECLARE FILTER DESC_FILTER
  INPUT_TYPE 1
  OUTPUT_TYPE -4
  ENTRY_POINT 'desc_filter'
  MODULE_NAME 'FILTERLIB';
-
Creating a BLOB filter using subtype mnemonic names.

DECLARE FILTER FUNNEL
  INPUT_TYPE blr OUTPUT_TYPE text
  ENTRY_POINT 'blr2asc' MODULE_NAME 'myfilterlib';
5.13.2. DROP FILTER
Drops a BLOB
filter declaration from the current database
DSQL, ESQL
DROP FILTER filtername
Parameter | Description |
---|---|
filtername |
Filter name in the database |
The DROP FILTER
statement removes the declaration of a BLOB
filter from the database.
Removing a BLOB
filter from a database makes it unavailable for use from that database.
The dynamic library where the conversion function is located remains intact and the removal from one database does not affect other databases in which the same BLOB
filter is still declared.
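Example of DROP FILTER

Removing the declaration of the DESC_FILTER filter declared in the DECLARE FILTER examples:

```sql
DROP FILTER DESC_FILTER;
```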
5.14. SEQUENCE
(GENERATOR
)
A sequence — or generator — is a database object used to get unique number values to fill a series. “Sequence” is the SQL-compliant term for the same thing which — in Firebird — has traditionally been known as “generator”. Firebird has syntax for both terms.
Sequences are stored as 64-bit integers, regardless of the SQL dialect of the database.
If a client is connected using Dialect 1, the server handles sequence values as 32-bit integers. Passing a sequence value to a 32-bit field or variable will not cause errors as long as the current value of the sequence does not exceed the limits of a 32-bit number. However, as soon as the sequence value exceeds this limit, a database in Dialect 3 will produce an error. A database in Dialect 1 will truncate (overflow) the value, which could compromise the uniqueness of the series. |
This section describes how to create, alter, set and drop sequences.
5.14.1. CREATE SEQUENCE
Creates a sequence
DSQL, ESQL
CREATE {SEQUENCE | GENERATOR} seq_name [START WITH start_value] [INCREMENT [BY] increment]
Parameter | Description |
---|---|
seq_name |
Sequence name. The maximum length is 63 characters |
start_value |
First value produced by |
increment |
Increment of the sequence when using |
When a sequence is created, its current value is set so that the next value produced by NEXT VALUE FOR seq_name
is equal to start_value.
In other words, the current value of the sequence is set to (start_value - increment
).
The optional INCREMENT [BY]
clause allows you to specify a non-zero increment for the NEXT VALUE FOR seq_name
expression.
The GEN_ID(seq_name, step)
function can be called instead, to “step” the sequence by a different increment.
The increment specified through INCREMENT [BY]
is not used by GEN_ID
.
Using both NEXT VALUE FOR
and GEN_ID
, especially when the sequence has an increment other than 1
, may result in values you did not expect.
For example, if you execute CREATE SEQUENCE x START WITH 10 INCREMENT BY 10
, and then use GEN_ID(x, 1)
, the value returned is 1
, and if you then call NEXT VALUE FOR x
, you get 11
.
Non-standard behaviour for negative increments
The SQL standard specifies that sequences with a negative increment should start at the maximum value of the sequence (2^63 - 1) and count down.
Firebird does not do that, and instead starts at 1.

This may change in a future Firebird version. |
The statements CREATE SEQUENCE
and CREATE GENERATOR
are synonymous — both create a new sequence.
Either can be used, but CREATE SEQUENCE
is recommended as that is the syntax defined in the SQL standard.
Who Can Create a Sequence?
The CREATE SEQUENCE
(CREATE GENERATOR
) statement can be executed by:
-
Users with the
CREATE SEQUENCE
(CREATE GENERATOR
) privilege
The user executing CREATE SEQUENCE
(CREATE GENERATOR
) becomes its owner.
Examples of CREATE SEQUENCE
-
Creating the
EMP_NO_GEN
sequence usingCREATE SEQUENCE
.CREATE SEQUENCE EMP_NO_GEN;
-
Creating the
EMP_NO_GEN
sequence usingCREATE GENERATOR
.CREATE GENERATOR EMP_NO_GEN;
-
Creating the
EMP_NO_GEN
sequence with an initial value of 5 and an increment of 1.CREATE SEQUENCE EMP_NO_GEN START WITH 5;
-
Creating the
EMP_NO_GEN
sequence with an initial value of 1 and an increment of 10.CREATE SEQUENCE EMP_NO_GEN INCREMENT BY 10;
-
Creating the
EMP_NO_GEN
sequence with an initial value of 5 and an increment of 10.CREATE SEQUENCE EMP_NO_GEN START WITH 5 INCREMENT BY 10;
5.14.2. ALTER SEQUENCE
Sets the next value of a sequence, or changes its increment
DSQL
ALTER {SEQUENCE | GENERATOR} seq_name [RESTART [WITH start_value]] [INCREMENT [BY] increment]
Parameter | Description |
---|---|
seq_name |
Sequence name |
start_value |
Next value produced by |
increment |
Increment of the sequence (when using |
The ALTER SEQUENCE
statement sets the next value of the sequence, and/or changes the increment of the sequence.
The RESTART WITH start_value
clause sets the current value of the sequence so that the next value obtained from NEXT VALUE FOR seq_name
is equal to start_value.
To achieve this, the current value of the sequence is set to (start_value - increment
) with increment either as specified in the statement, or from the metadata of the sequence.
The RESTART
clause without WITH start_value
behaves as if WITH start_value
is specified with the start value from the metadata of the sequence.
Contrary to Firebird 3.0, since Firebird 4.0 RESTART WITH start_value only restarts the sequence with the specified value, and does not store start_value as the new start value of the sequence. A subsequent RESTART without WITH will restart the sequence with the start value stored in the metadata, not with the value specified earlier.

It is currently not possible to change the start value stored in the metadata. |
Incorrect use of ALTER SEQUENCE (changing the current value of the sequence or generator) is likely to break the logical integrity of data. |
INCREMENT [BY]
allows you to change the sequence increment for the NEXT VALUE FOR
expression.
Changing the increment value takes effect for all queries that run after the transaction commits.
Procedures that are called for the first time after the change was committed will use the new increment if they use NEXT VALUE FOR.
Procedures that were already cached in the metadata cache will continue to use the old increment.
You may need to close all connections to the database for the metadata cache to clear, and the new increment to be used.
Procedures using NEXT VALUE FOR
do not need to be recompiled to see the new increment.
Procedures using GEN_ID(gen, expression)
are not affected when the increment is changed.
Who Can Alter a Sequence?
The ALTER SEQUENCE
(ALTER GENERATOR
) statement can be executed by:
-
The owner of the sequence
-
Users with the
ALTER ANY SEQUENCE
(ALTER ANY GENERATOR
) privilege
Examples of ALTER SEQUENCE
-
Setting the value of the
EMP_NO_GEN
sequence so the next value is 145.ALTER SEQUENCE EMP_NO_GEN RESTART WITH 145;
-
Resetting the sequence
EMP_NO_GEN
to the start value stored in the metadataALTER SEQUENCE EMP_NO_GEN RESTART;
-
Changing the increment of sequence
EMP_NO_GEN
to 10ALTER SEQUENCE EMP_NO_GEN INCREMENT BY 10;
5.14.3. CREATE OR ALTER SEQUENCE
Creates a sequence if it doesn’t exist, or alters a sequence
DSQL, ESQL
CREATE OR ALTER {SEQUENCE | GENERATOR} seq_name {RESTART | START WITH start_value} [INCREMENT [BY] increment]
Parameter | Description |
---|---|
seq_name |
Sequence name. The maximum length is 63 characters |
start_value |
First or next value produced by |
increment |
Increment of the sequence when using |
If the sequence does not exist, it will be created as documented under CREATE SEQUENCE
.
An existing sequence will be changed:
-
If
RESTART
is specified, the sequence is restarted with the start value stored in the metadata -
If the
START WITH
clause is specified, the sequence is restarted with start_value, but the start_value is not stored. In other words, it behaves asRESTART WITH
inALTER SEQUENCE
. -
If the
INCREMENT [BY]
clause is specified, increment is stored as the increment in the metadata, and used for subsequent calls toNEXT VALUE FOR
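The behaviour described above can be illustrated with the EMP_NO_GEN sequence from the earlier examples; the values used here are arbitrary:

```sql
-- Creates the sequence if it is missing, or restarts an existing one
-- so that the next value is 100; the increment 10 is stored in the metadata
CREATE OR ALTER SEQUENCE EMP_NO_GEN
  START WITH 100
  INCREMENT BY 10;
```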
5.14.4. DROP SEQUENCE
Drops a sequence
DSQL, ESQL
DROP {SEQUENCE | GENERATOR} seq_name
Parameter | Description |
---|---|
seq_name |
Sequence name. The maximum length is 63 characters |
The statements DROP SEQUENCE
and DROP GENERATOR
are equivalent: both drop (delete) an existing sequence.
Either is valid but DROP SEQUENCE
, being defined in the SQL standard, is recommended.
The statements will fail if the sequence has dependencies.
Who Can Drop a Sequence?
The DROP SEQUENCE
(DROP GENERATOR
) statement can be executed by:
-
The owner of the sequence
-
Users with the
DROP ANY SEQUENCE
(DROP ANY GENERATOR
) privilege
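Example of DROP SEQUENCE

Dropping the EMP_NO_GEN sequence used in the earlier examples:

```sql
DROP SEQUENCE EMP_NO_GEN;
```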
5.14.5. RECREATE SEQUENCE
Drops a sequence if it exists, and creates a sequence
DSQL, ESQL
RECREATE {SEQUENCE | GENERATOR} seq_name [START WITH start_value] [INCREMENT [BY] increment]
Parameter | Description |
---|---|
seq_name |
Sequence name. The maximum length is 63 characters |
start_value |
First value produced by |
increment |
Increment of the sequence (when using |
See CREATE SEQUENCE
for the full syntax of CREATE SEQUENCE
and descriptions of defining a sequence and its options.
RECREATE SEQUENCE
creates or recreates a sequence.
If a sequence with this name already exists, the RECREATE SEQUENCE
statement will try to drop it and create a new one.
Existing dependencies will prevent the statement from executing.
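Example of RECREATE SEQUENCE

Dropping and recreating the EMP_NO_GEN sequence with a new starting point (the starting value here is arbitrary):

```sql
RECREATE SEQUENCE EMP_NO_GEN START WITH 10;
```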
5.14.6. SET GENERATOR
Sets the current value of a sequence
DSQL, ESQL
SET GENERATOR seq_name TO new_val
Parameter | Description |
---|---|
seq_name |
Sequence name |
new_val |
New sequence value. A 64-bit integer from -2^63 to 2^63 - 1. |
The SET GENERATOR
statement sets the current value of a sequence to the specified value.
Although SET GENERATOR is considered outdated, it is retained for backward compatibility. Use of the standards-compliant ALTER SEQUENCE is recommended. |
Who Can Use a SET GENERATOR
?
The SET GENERATOR
statement can be executed by:
-
The owner of the sequence
-
Users with the
ALTER ANY SEQUENCE
(ALTER ANY GENERATOR
) privilege
Example of SET GENERATOR
Setting the value of the EMP_NO_GEN sequence to 145:

SET GENERATOR EMP_NO_GEN TO 145;

Similar effects can be achieved with:

ALTER SEQUENCE EMP_NO_GEN RESTART WITH 145 + increment;

Here, the value of increment is the current increment of the sequence.
We need to add it because ALTER SEQUENCE RESTART WITH sets the current value to (start_value - increment), so the next value produced matches that of SET GENERATOR. |
5.15. EXCEPTION
This section describes how to create, modify and delete custom exceptions for use in error handlers in PSQL modules.
5.15.1. CREATE EXCEPTION
Creates a custom exception for use in PSQL modules
DSQL, ESQL
CREATE EXCEPTION exception_name '<message>'

<message> ::= <message-part> [<message-part> ...]
<message-part> ::= <text> | @<slot>
<slot> ::= one of 1..9
Parameter | Description |
---|---|
exception_name |
Exception name. The maximum length is 63 characters |
message |
Default error message. The maximum length is 1,021 characters |
text |
Text of any character |
slot |
Slot number of a parameter. Numbering starts at 1. Maximum slot number is 9. |
The statement CREATE EXCEPTION
creates a new exception for use in PSQL modules.
If an exception with the same name exists, the statement will raise an error.
The exception name is an identifier, see Identifiers for more information.
The default message is stored in character set NONE
, i.e. in characters of any single-byte character set.
The text can be overridden in the PSQL code when the exception is thrown.
The error message may contain “parameter slots” that can be filled when raising the exception.
If the message contains a parameter slot number that is greater than 9, the second and subsequent digits will be treated as literal text.
For example, a message containing @10 is interpreted as slot 1 (@1) followed by the literal text '0'. |
Custom exceptions are stored in the system table RDB$EXCEPTIONS
.
Who Can Create an Exception
The CREATE EXCEPTION
statement can be executed by:
-
Users with the
CREATE EXCEPTION
privilege
The user executing the CREATE EXCEPTION
statement becomes the owner of the exception.
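Examples of CREATE EXCEPTION

Two illustrative exceptions (the names and messages are examples only); the second uses parameter slots that can be filled when the exception is raised:

```sql
-- Exception with a fixed default message
CREATE EXCEPTION E_LARGE_VALUE
  'The value is out of range';

-- Exception with parameter slots @1 and @2,
-- which can be filled via EXCEPTION ... USING (...) in PSQL
CREATE EXCEPTION E_INVALID_VALUE
  'Invalid value @1 for field @2';
```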
5.15.2. ALTER EXCEPTION
Alters the default message of a custom exception
DSQL, ESQL
ALTER EXCEPTION exception_name '<message>'
!! See syntax of CREATE EXCEPTION
for further rules !!
Who Can Alter an Exception
The ALTER EXCEPTION
statement can be executed by:
-
The owner of the exception
-
Users with the
ALTER ANY EXCEPTION
privilege
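Example of ALTER EXCEPTION

Changing the default message of an exception; the exception name E_LARGE_VALUE is illustrative:

```sql
ALTER EXCEPTION E_LARGE_VALUE
  'The value exceeds the permitted range';
```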
5.15.3. CREATE OR ALTER EXCEPTION
Creates a custom exception if it doesn’t exist, or alters a custom exception
DSQL
CREATE OR ALTER EXCEPTION exception_name '<message>'
!! See syntax of CREATE EXCEPTION
for further rules !!
The statement CREATE OR ALTER EXCEPTION
is used to create the specified exception if it does not exist, or to modify the text of the error message returned from it if it exists already.
If an existing exception is altered by this statement, any existing dependencies will remain intact.
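Example of CREATE OR ALTER EXCEPTION

Creating an exception or, if it already exists, replacing its message while keeping dependencies intact; the exception name is illustrative:

```sql
CREATE OR ALTER EXCEPTION E_LARGE_VALUE
  'The value is too large';
```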
5.15.4. DROP EXCEPTION
Drops a custom exception
DSQL, ESQL
DROP EXCEPTION exception_name
Parameter | Description |
---|---|
exception_name |
Exception name |
The statement DROP EXCEPTION
is used to delete an exception.
Any dependencies on the exception will cause the statement to fail, and the exception will not be deleted.
Who Can Drop an Exception
The DROP EXCEPTION
statement can be executed by:
-
The owner of the exception
-
Users with the
DROP ANY EXCEPTION
privilege
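Example of DROP EXCEPTION

Deleting an exception; the name is illustrative, and the statement fails if any PSQL module still references the exception:

```sql
DROP EXCEPTION E_LARGE_VALUE;
```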
5.15.5. RECREATE EXCEPTION
Drops a custom exception if it exists, and creates a custom exception
DSQL
RECREATE EXCEPTION exception_name '<message>'
!! See syntax of CREATE EXCEPTION
for further rules !!
The statement RECREATE EXCEPTION
creates a new exception for use in PSQL modules.
If an exception with the same name exists already, the RECREATE EXCEPTION
statement will try to drop it and create a new one.
If there are any dependencies on the existing exception, the attempted deletion fails and RECREATE EXCEPTION
is not executed.
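Example of RECREATE EXCEPTION

Dropping and recreating an exception with a new message; the name and message are illustrative:

```sql
RECREATE EXCEPTION E_LARGE_VALUE
  'The value exceeds 1000';
```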
5.16. COLLATION
In SQL, text strings are sortable objects.
This means that they obey ordering rules, such as alphabetical order.
Comparison operations can be applied to such text strings (for example, “less than” or “greater than”), where the comparison must apply a certain sort order or collation.
For example, the expression “'a' < 'b'
” means that ‘'a'
’ precedes ‘'b'
’ in the collation.
The expression “'c' > 'b'
” means that ‘'c'
’ follows ‘'b'
’ in the collation.
Text strings of more than one character are sorted using sequential character comparisons: first the first characters of the two strings are compared, then the second characters, and so on, until a difference is found between the two strings.
This difference defines the sort order.
A COLLATION
is the schema object that defines a collation (or sort order).
5.16.1. CREATE COLLATION
Defines a new collation for a character set
DSQL
CREATE COLLATION collname
  FOR charset
  [FROM {basecoll | EXTERNAL ('extname')}]
  [NO PAD | PAD SPACE]
  [CASE [IN]SENSITIVE]
  [ACCENT [IN]SENSITIVE]
  ['<specific-attributes>']

<specific-attributes> ::= <attribute> [; <attribute> ...]
<attribute> ::= attrname=attrvalue
Parameter | Description |
---|---|
collname |
The name to use for the new collation. The maximum length is 63 characters |
charset |
A character set present in the database |
basecoll |
A collation already present in the database |
extname |
The collation name used in the .conf file (case-sensitive) |
The CREATE COLLATION
statement does not “create” anything, its purpose is to make a collation known to a database.
The collation must already be present on the system, typically in a library file, and must be properly registered in a .conf
file in the intl
subdirectory of the Firebird installation.
The collation may alternatively be based on one that is already present in the database.
How the Engine Detects the Collation
The optional FROM
clause specifies the base collation that is used to derive a new collation.
This collation must already be present in the database.
If the keyword EXTERNAL
is specified, then Firebird will scan the .conf
files in $fbroot/intl/
, where extname must exactly match the name in the configuration file (case-sensitive).
If no FROM
clause is present, Firebird will scan the .conf
file(s) in the intl
subdirectory for a collation with the collation name specified in CREATE COLLATION
.
In other words, omitting the FROM basecoll
clause is equivalent to specifying FROM EXTERNAL ('collname')
.
The — single-quoted — extname is case-sensitive and must correspond exactly with the collation name in the .conf
file.
The collname, charset and basecoll parameters are case-insensitive unless enclosed in double-quotes.
When creating a collation, you can specify whether trailing spaces are included in the comparison.
If the NO PAD
clause is specified, trailing spaces are taken into account in the comparison.
If the PAD SPACE
clause is specified, trailing spaces are ignored in the comparison.
The optional CASE
clause allows you to specify whether the comparison is case-sensitive or case-insensitive.
The optional ACCENT
clause allows you to specify whether the comparison is accent-sensitive or accent-insensitive (e.g. if ‘'e'
’ and ‘'é'
’ are considered equal or unequal).
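The effect of these clauses can be sketched as follows (the base collation PT_BR and the new collation name are illustrative; any collation registered for the character set will do):

```sql
-- Hypothetical names; assumes PT_BR is registered for ISO8859_1.
CREATE COLLATION PT_BR_NOPAD_CI
  FOR ISO8859_1
  FROM PT_BR
  NO PAD
  CASE INSENSITIVE;

-- Under NO PAD, trailing spaces are significant:
--   'abc' = 'abc '   -- false
-- Under CASE INSENSITIVE, letter case is ignored:
--   'ABC' = 'abc'    -- true
```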
Specific Attributes
The CREATE COLLATION
statement can also include specific attributes to configure the collation.
The available specific attributes are listed in the table below.
Not all specific attributes apply to every collation.
If the attribute is not applicable to the collation, but is specified when creating it, it will not cause an error.
Specific attribute names are case-sensitive. |
In the table, “1 bpc” indicates that an attribute is valid for collations of character sets using 1 byte per character (so-called narrow character sets), and “UNI” for “Unicode collations”.
Attribute | Values | Valid for | Comment |
---|---|---|---|
|
|
1 bpc, UNI |
Disables compressions (a.k.a. contractions). Compressions cause certain character sequences to be sorted as atomic units, e.g. Spanish c+h as a single character ch |
|
|
1 bpc |
Disables expansions. Expansions cause certain characters (e.g. ligatures or umlauted vowels) to be treated as character sequences and sorted accordingly |
|
default or M.m |
UNI |
Specifies the ICU library version to use.
Valid values are the ones defined in the applicable |
|
xx_YY |
UNI |
Specifies the collation locale. Requires complete version of ICU libraries. Format: a locale string like “du_NL” (unquoted) |
|
|
1 bpc |
Uses more than one ordering level |
|
|
UNI |
Treats contiguous groups of decimal digits in the string as atomic units and sorts them numerically. (This is also known as natural sorting) |
|
|
1 bpc |
Orders special characters (spaces, symbols etc.) before alphanumeric characters |
If you want to add a new character set with its default collation into your database, declare and run the stored procedure supplied for that purpose. In order for this to work, the character set must be present on the system and registered in a .conf file in the intl subdirectory. |
Who Can Create a Collation
The CREATE COLLATION
statement can be executed by:
-
Users with the
CREATE COLLATION
privilege
The user executing the CREATE COLLATION
statement becomes the owner of the collation.
Examples using CREATE COLLATION
-
Creating a collation using the name found in the
fbintl.conf
file (case-sensitive)

CREATE COLLATION ISO8859_1_UNICODE FOR ISO8859_1;
-
Creating a collation using a special (user-defined) name (the “external” name must match the name in the
fbintl.conf
file)

CREATE COLLATION LAT_UNI FOR ISO8859_1 FROM EXTERNAL ('ISO8859_1_UNICODE');
-
Creating a case-insensitive collation based on one already existing in the database
CREATE COLLATION ES_ES_NOPAD_CI FOR ISO8859_1 FROM ES_ES NO PAD CASE INSENSITIVE;
-
Creating a case-insensitive collation based on one already existing in the database with specific attributes
CREATE COLLATION ES_ES_CI_COMPR FOR ISO8859_1 FROM ES_ES CASE INSENSITIVE 'DISABLE-COMPRESSIONS=0';
-
Creating a case-insensitive collation by the value of numbers (the so-called natural collation)
CREATE COLLATION nums_coll FOR UTF8 FROM UNICODE
  CASE INSENSITIVE 'NUMERIC-SORT=1';

CREATE DOMAIN dm_nums AS varchar(20)
  CHARACTER SET UTF8 COLLATE nums_coll; -- original (manufacturer) numbers

CREATE TABLE wares(id int primary key, articul dm_nums ...);
5.16.2. DROP COLLATION
Drops a collation from the database
DSQL
DROP COLLATION collname
Parameter | Description |
---|---|
collname |
The name of the collation |
The DROP COLLATION
statement removes the specified collation from the database, if it exists.
An error will be raised if the specified collation is not present.
If you want to remove an entire character set with all its collations from the database, declare and execute the stored procedure supplied for that purpose. |
Who Can Drop a Collation
The DROP COLLATION
statement can be executed by:
-
The owner of the collation
-
Users with the
DROP ANY COLLATION
privilege
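For example, dropping the case-insensitive collation created in the CREATE COLLATION examples above:

```sql
DROP COLLATION ES_ES_NOPAD_CI;
```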
5.17. CHARACTER SET
5.17.1. ALTER CHARACTER SET
Sets the default collation of a character set
DSQL
ALTER CHARACTER SET charset SET DEFAULT COLLATION collation
Parameter | Description |
---|---|
charset |
Character set identifier |
collation |
The name of the collation |
This will affect the future usage of the character set, except for cases where the COLLATE
clause is explicitly overridden.
In that case, the collation of existing domains, columns and PSQL variables will remain intact after the change to the default collation of the underlying character set.
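For example, making a case-insensitive collation the default for the ISO8859_1 character set (this assumes a collation named FR_CA_CI has already been created in the database):

```sql
ALTER CHARACTER SET ISO8859_1
  SET DEFAULT COLLATION FR_CA_CI;
```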
If you change the default collation for the database character set (the one defined when the database was created), it will change the default collation for the database. If you change the default collation for the character set that was specified during the connection, string constants will be interpreted according to the new collation value, except in those cases where the character set and/or the collation have been overridden. |
5.18. Comments
Database objects and a database itself may be annotated with comments.
It is a convenient mechanism for documenting the development and maintenance of a database.
Comments created with COMMENT ON
will survive a gbak backup and restore.
5.18.1. COMMENT ON
Adds a comment to a metadata object
DSQL
COMMENT ON <object> IS {'sometext' | NULL} <object> ::= {DATABASE | SCHEMA} | <basic-type> objectname | USER username [USING PLUGIN pluginname] | COLUMN relationname.fieldname | [{PROCEDURE | FUNCTION}] PARAMETER [packagename.]routinename.paramname | {PROCEDURE | [EXTERNAL] FUNCTION} [package_name.]routinename | [GLOBAL] MAPPING mappingname <basic-type> ::= CHARACTER SET | COLLATION | DOMAIN | EXCEPTION | FILTER | GENERATOR | INDEX | PACKAGE | ROLE | SEQUENCE | TABLE | TRIGGER | VIEW
Parameter | Description |
---|---|
sometext |
Comment text |
basic-type |
Metadata object type |
objectname |
Metadata object name |
username |
Username |
pluginname |
User manager plugin name |
relationname |
Name of table or view |
fieldname |
Name of the column |
package_name |
Name of the package |
routinename |
Name of stored procedure or function |
paramname |
Name of a stored procedure or function parameter |
mappingname |
Name of a mapping |
The COMMENT ON
statement adds comments for database objects (metadata).
Comments are saved to the RDB$DESCRIPTION
column of the corresponding system tables.
Client applications can view comments from these fields.
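As a sketch, a comment on a table can be read back directly from the system tables (the METALS table is taken from the examples below):

```sql
SELECT RDB$DESCRIPTION
FROM RDB$RELATIONS
WHERE RDB$RELATION_NAME = 'METALS';
```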
Comments on users are visible to that user through the SEC$USERS virtual table. |
Who Can Add a Comment
The COMMENT ON
statement can be executed by:
-
The owner of the object that is commented on
-
Users with the
ALTER ANY object_type
privilege, where object_type is the type of object commented on (e.g. PROCEDURE
)
Examples using COMMENT ON
-
Adding a comment for the current database
COMMENT ON DATABASE IS 'It is a test (''my.fdb'') database';
-
Adding a comment for the
METALS
table

COMMENT ON TABLE METALS IS 'Metal directory';
-
Adding a comment for the
ISALLOY
field in theMETALS
table

COMMENT ON COLUMN METALS.ISALLOY IS '0 = fine metal, 1 = alloy';
-
Adding a comment for a parameter
COMMENT ON PARAMETER ADD_EMP_PROJ.EMP_NO IS 'Employee ID';
-
Adding a comment for a package, its procedures and functions, and their parameters
COMMENT ON PACKAGE APP_VAR IS 'Application Variables';
COMMENT ON FUNCTION APP_VAR.GET_DATEBEGIN IS 'Returns the start date of the period';
COMMENT ON PROCEDURE APP_VAR.SET_DATERANGE IS 'Set date range';
COMMENT ON PROCEDURE PARAMETER APP_VAR.SET_DATERANGE.ADATEBEGIN IS 'Start Date';
6. Data Manipulation (DML) Statements
DML — data manipulation language — is the subset of SQL that is used by applications and procedural modules to extract and change data.
Extraction, for the purpose of reading data, both raw and manipulated, is achieved with the SELECT
statement.
INSERT
is for adding new data and DELETE
is for erasing data that is no longer required.
UPDATE
, MERGE
and UPDATE OR INSERT
all modify data in various ways.
6.1. SELECT
“Queries” or retrieves data from the database
[WITH [RECURSIVE] <cte> [, <cte> ...]] SELECT [FIRST <limit-expression>] [SKIP <limit-expression>] [{ALL | DISTINCT}] <select-list> FROM <table-reference> [, <table-reference> ...] [WHERE <search-condition>] [GROUP BY <value-expression> [, <value-expression> ...] [HAVING <search-condition>]] [WINDOW <window_definition> [, <window_definition> ...]] [PLAN <plan-expression>] [UNION [{DISTINCT | ALL}] <query-term>] [ORDER BY <sort-specification> [, <sort-specification> ...]] [{ ROWS <value-expression> [TO <value-expression>] | [OFFSET <offset-fetch-expression> {ROW | ROWS}] [FETCH {FIRST | NEXT} [<offset-fetch-expression>] {ROW | ROWS} ONLY] }] [FOR UPDATE [OF <column-name-list>]] [WITH LOCK [SKIP LOCKED]] [OPTIMIZE FOR {FIRST | ALL} ROWS] [INTO <variable-list>] <variable-list> ::= <variable> [, <variable> ...] <variable> ::= varname | ':' varname
The above syntax is not the full SELECT syntax; it is simplified to show the overall structure of a query. |
The SELECT
statement retrieves data from the database and hands them to the application or the enclosing SQL statement.
Data is returned in zero or more rows, each containing one or more columns or fields.
The set of rows returned is the result set of the statement.
The only mandatory parts of the SELECT
statement are:
-
The
SELECT
keyword, followed by a select list. This part specifies what you want to retrieve. -
The
FROM
keyword, followed by a selectable object. This tells the engine where you want to get it from.
In its most basic form, SELECT
retrieves a number of columns from a single table or view, like this:
select id, name, address
from contacts
Or, to retrieve all the columns:
select * from sales
In practice, a SELECT
statement is usually executed with a WHERE
clause, which limits the rows returned.
The result set may be sorted by an ORDER BY
clause, and FIRST … SKIP
, OFFSET … FETCH
or ROWS
may further limit the number of returned rows, and can — for example — be used for pagination.
The column list may contain all kinds of expressions, not only column names, and the source need not be a table or view: it may also be a derived table, a common table expression (CTE) or a selectable stored procedure.
Multiple sources may be combined in a JOIN
, and multiple result sets may be combined in a UNION
.
The following sections discuss the available SELECT
subclauses and their usage in detail.
6.1.1. FIRST
, SKIP
Retrieves a slice of rows from an ordered set
SELECT [FIRST <limit-expression>] [SKIP <limit-expression>] FROM ... ... <limit-expression> ::= <integer-literal> | <query-parameter> | (<value-expression>)
Argument | Description |
---|---|
integer-literal |
Integer literal |
query-parameter |
Query parameter placeholder |
value-expression |
A valid expression (returning an integer value) |
FIRST and SKIP are non-standard syntax
|
FIRST m
limits the output of a query to the first m rows.
SKIP n
will skip the first n rows of the result set before returning rows.
FIRST
and SKIP
are both optional.
When used together as in “FIRST m SKIP n
”, the n topmost rows of the result set are discarded, and the first m rows of the rest of the set are returned.
Characteristics of FIRST
and SKIP
-
Any argument to
FIRST
andSKIP
that is not an integer literal or an SQL parameter must be enclosed in parentheses. This implies that a subquery expression must be enclosed in two pairs of parentheses. -
SKIP 0
is allowed but pointless. -
FIRST 0
is also allowed and returns an empty set. -
Negative
SKIP
and/orFIRST
values result in an error. -
If a
SKIP
lands past the end of the result set, an empty set is returned. -
If the number of rows in the result set (or the remainder left after a
SKIP
) is less than the value of the m argument supplied forFIRST
, that smaller number of rows is returned. These are valid results, not error conditions.
Examples of FIRST/SKIP
-
The following query will return the first 10 names from the
People
table:

select first 10 id, name from People order by name asc
-
The following query will return everything but the first 10 names:
select skip 10 id, name from People order by name asc
-
And this one returns the last 10 rows. Notice the double parentheses:
select skip ((select count(*) - 10 from People)) id, name from People order by name asc
-
This query returns rows 81 to 100 of the People table:
select first 20 skip 80 id, name from People order by name asc
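The same slice can be expressed with the ROWS clause, which is 1-based and inclusive, so rows 81 to 100 map directly onto the bounds:

```sql
select id, name from People
order by name asc
rows 81 to 100
```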
6.1.2. The SELECT
Columns List
The columns list contains one or more comma-separated value expressions.
Each expression provides a value for one output column.
Alternatively, *
(“star” or “all”) can be used to stand for all the columns of all relations in the FROM
clause.
SELECT [...] [{ ALL | DISTINCT }] <select-list> [...] FROM ... <select-list> ::= * | <select-sublist> [, <select-sublist> ...] <select-sublist> ::= table-alias.* | <value-expression> [[AS] column-alias] <value-expression> ::= [table-alias.]col_name | [table-alias.]selectable_SP_outparm | <literal> | <context-variable> | <function-call> | <single-value-subselect> | <CASE-construct> | any other expression returning a single value of a Firebird data type or NULL <function-call> ::= <normal-function> | <aggregate-function> | <window-function> <normal-function> ::= !! See Built-in Scalar Functions !! <aggregate-function> ::= !! See Aggregate Functions !! <window-function> ::= !! See Window Functions !!
Argument | Description |
---|---|
table-alias |
Name of relation (view, stored procedure, derived table), or its alias |
col_name |
Name of a table or view column, or its alias |
selectable_SP_outparm |
Declared name of an output parameter of a selectable stored procedure |
literal |
A literal |
context-variable |
Context variable |
function-call |
Scalar, aggregate, or window function expression |
single-value-subselect |
A subquery returning one scalar value (singleton) |
CASE-construct |
CASE construct setting conditions for a return value |
It is always valid to qualify a column name (or “*
”) with the name or alias of the table, view or selectable SP to which it belongs, followed by a dot (‘.
’).
For example, relationname.columnname
, relationname.*
, alias.columnname
, alias.*
.
Qualifying is required if the column name occurs in more than one relation taking part in a join.
Qualifying “*
” is required if it is not the only item in the column list.
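A short sketch of a qualified star alongside other select-list items (table names are illustrative, reusing the pupils/classes tables from later examples):

```sql
-- p.* must be qualified because it is not the only item in the list
select p.*, c.name as class
from pupils p
join classes c on c.id = p.class
```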
Aliases hide the original relation name: once a table, view or procedure has been aliased, only the alias can be used as its qualifier throughout the query. The relation name itself becomes unavailable. |
The column list may optionally be preceded by one of the keywords DISTINCT
or ALL
:
-
DISTINCT
filters out any duplicate rows. That is, if two or more rows have the same values in every corresponding column, only one of them is included in the result set -
ALL
is the default: it returns all rows, including duplicates. ALL
is rarely used; it is allowed for compliance with the SQL standard.
A COLLATE
clause of a value-expression will not change the appearance of the column as such.
However, if the specified collation changes the case or accent sensitivity of the column, it may influence:
-
The ordering, if an
ORDER BY
clause is also present, and it involves that column -
Grouping, if the column is part of a
GROUP BY
clause -
The rows retrieved (and hence the total number of rows in the result set), if
DISTINCT
is used
Examples of SELECT
queries with different types of column lists
A simple SELECT
using only column names:
select cust_id, cust_name, phone
from customers
where city = 'London'
A query featuring a concatenation expression and a function call in the columns list:
select 'Mr./Mrs. ' || lastname, street, zip, upper(city)
from contacts
where date_last_purchase(id) = current_date
A query with two subselects:
select p.fullname,
(select name from classes c where c.id = p.class) as class,
(select name from mentors m where m.id = p.mentor) as mentor
from pupils p
The following query accomplishes the same as the previous one using joins instead of subselects:
select p.fullname,
c.name as class,
m.name as mentor
from pupils p
join classes c on c.id = p.class
join mentors m on m.id = p.mentor
This query uses a CASE
construct to determine the correct title, e.g. when sending mail to a person:
select case upper(sex)
when 'F' then 'Mrs.'
when 'M' then 'Mr.'
else ''
end as title,
lastname,
address
from employees
A query using a window function that ranks employees by salary:
SELECT
id,
salary,
name,
DENSE_RANK() OVER (ORDER BY salary) AS EMP_RANK
FROM employees
ORDER BY salary;
Querying a selectable stored procedure:
select * from interesting_transactions(2010, 3, 'S')
order by amount
Selecting from columns of a derived table.
A derived table is a parenthesized SELECT
statement whose result set is used in an enclosing query as if it were a regular table or view.
The derived table is shown in bold here:
select fieldcount,
count(relation) as num_tables
from (select r.rdb$relation_name as relation,
count(*) as fieldcount
from rdb$relations r
join rdb$relation_fields rf
on rf.rdb$relation_name = r.rdb$relation_name
group by relation)
group by fieldcount
Asking the time through a context variable (CURRENT_TIME
):
select current_time from rdb$database
For those not familiar with RDB$DATABASE
: this is a system table that is present in all Firebird databases and is guaranteed to contain exactly one row.
Although it wasn’t created for this purpose, it has become standard practice among Firebird programmers to select from this table if you want to select “from nothing”, i.e. if you need data that are not bound to a table or view, but can be derived from the expressions in the output columns alone.
Another example is:
select power(12, 2) as twelve_squared, power(12, 3) as twelve_cubed
from rdb$database
Finally, an example where you select some meaningful information from RDB$DATABASE
itself:
select rdb$character_set_name from rdb$database
As you may have guessed, this will give you the default character set of the database.
See also: Functions, Aggregate Functions, Window Functions, Context Variables, CASE, Subqueries
6.1.3. The FROM
clause
The FROM
clause specifies the source(s) from which the data are to be retrieved.
In its simplest form, this is a single table or view.
However, the source can also be a selectable stored procedure, a derived table, or a common table expression.
Multiple sources can be combined using various types of joins.
This section focuses on single-source selects. Joins are discussed in a following section.
SELECT ... FROM <table-reference> [, <table-reference> ...] [...] <table-reference> ::= <table-primary> | <joined-table> <table-primary> ::= <table-or-query-name> [[AS] correlation-name] | [LATERAL] <derived-table> [<correlation-or-recognition>] | <parenthesized-joined-table> <table-or-query-name> ::= table-name | query-name | [package-name.]procedure-name [(<procedure-args>)] <procedure-args> ::= <value-expression> [, <value-expression> ...] <derived-table> ::= (<query-expression>) <correlation-or-recognition> ::= [AS] correlation-name [(<column-name-list>)] <column-name-list> ::= column-name [, column-name ...]
Argument | Description |
---|---|
table-name |
Name of a table or view |
query-name |
Name of a CTE |
package-name |
Name of a package |
procedure-name |
Name of a selectable stored procedure |
procedure-args |
Selectable stored procedure arguments |
derived-table |
Derived table query expression |
correlation-name |
The alias of a data source (table, view, procedure, CTE, derived table) |
column-name |
Name or alias for a column in a relation, CTE or derived table |
Selecting FROM
a table or view
When selecting from a single table or view, the FROM
clause requires nothing more than the name.
An alias may be useful or even necessary if there are subqueries that refer to the main select statement (as they often do — subqueries like this are called correlated subqueries).
Examples
select id, name, sex, age from actors
where state = 'Ohio'
select * from birds
where type = 'flightless'
order by family, genus, species
select firstname,
middlename,
lastname,
date_of_birth,
(select name from schools s where p.school = s.id) schoolname
from pupils p
where year_started = '2012'
order by schoolname, date_of_birth
Never mix column names with column aliases!
If you specify an alias for a table or a view, you must always use this alias in place of the table name whenever you query the columns of the relation (and wherever else you make a reference to columns, such as in ORDER BY clauses). |
Selecting FROM
a stored procedure
A selectable stored procedure is a procedure that:
-
contains at least one output parameter, and
-
utilizes the
SUSPEND
keyword so the caller can fetch the output rows one by one, like selecting from a table or view.
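The two requirements above can be sketched with a minimal selectable procedure (all names are illustrative, not part of any standard library): it declares an output parameter and calls SUSPEND once per row.

```sql
set term #;

create procedure numbers_up_to (n integer)
returns (num integer)
as
begin
  num = 1;
  while (num <= n) do
  begin
    suspend;  -- hands the current value of num to the caller as one row
    num = num + 1;
  end
end#

set term ;#

-- Each SUSPEND produces one row: this yields 1, 2 and 3
select num from numbers_up_to(3);
```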
The output parameters of a selectable stored procedure correspond to the columns of a regular table.
Selecting from a stored procedure without input parameters is like selecting from a table or view:
select * from suspicious_transactions
where assignee = 'John'
Any required input parameters must be specified after the procedure name, enclosed in parentheses:
select name, az, alt from visible_stars('Brugge', current_date, '22:30')
where alt >= 20
order by az, alt
Values for optional parameters (that is, parameters for which default values have been defined) may be omitted or provided. However, if you provide them only partly, the parameters you omit must all be at the tail end.
Supposing that the procedure visible_stars
from the previous example has two optional parameters: min_magn numeric(3,1)
and spectral_class varchar(12)
, the following queries are all valid:
select name, az, alt
from visible_stars('Brugge', current_date, '22:30');
select name, az, alt
from visible_stars('Brugge', current_date, '22:30', 4.0);
select name, az, alt
from visible_stars('Brugge', current_date, '22:30', 4.0, 'G');
But this one isn’t, because there’s a “hole” in the parameter list:
select name, az, alt
from visible_stars('Brugge', current_date, '22:30', 'G');
An alias for a selectable stored procedure is specified after the parameter list:
select
number,
(select name from contestants c where c.number = gw.number)
from get_winners('#34517', 'AMS') gw
If you refer to an output parameter (“column”) by qualifying it with the full procedure name, the procedure alias should be omitted:
select
number,
(select name from contestants c where c.number = get_winners.number)
from get_winners('#34517', 'AMS')
Selecting FROM
a derived table
A derived table is a valid SELECT
statement enclosed in parentheses, optionally followed by a table alias and/or column aliases.
The result set of the statement acts as a virtual table which the enclosing statement can query.
(<query-expression>) [<correlation-or-recognition>] <correlation-or-recognition> ::= [AS] correlation-name [(<column-name-list>)] <column-name-list> ::= column-name [, column-name ...]
The SQL standard requires the <correlation-or-recognition>, and not providing one makes it hard to reference the derived table or its columns. For maximum compatibility and portability, we recommend always specifying an alias (correlation-name). |
The result set returned by this “SELECT … FROM (SELECT FROM …)
” style of statement is a virtual table that can be queried within the enclosing statement, as if it were a regular table or view.
LATERAL
Derived Tables
The keyword LATERAL
marks a derived table as a lateral derived table.
Lateral derived tables can reference tables (including other derived tables) that occur earlier in the FROM
clause.
See Joins with LATERAL
Derived Tables for more information.
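As a short sketch (table and column names are illustrative), a lateral derived table can compute a per-row value by referencing the table to its left in the FROM clause:

```sql
-- For each pupil, the lateral derived table b references p.id
-- to compute that pupil's best score.
select p.fullname, b.best_score
from pupils p,
     lateral (select max(r.score) as best_score
              from results r
              where r.pupil_id = p.id) b
```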
Example using a derived table
The derived table in the query below returns the list of table names in the database, and the number of columns in each table. A “drill-down” query on the derived table returns the counts of fields and the counts of tables having each field count:
SELECT
FIELDCOUNT,
COUNT(RELATION) AS NUM_TABLES
FROM (SELECT
R.RDB$RELATION_NAME RELATION,
COUNT(*) AS FIELDCOUNT
FROM RDB$RELATIONS R
JOIN RDB$RELATION_FIELDS RF
ON RF.RDB$RELATION_NAME = R.RDB$RELATION_NAME
GROUP BY RELATION)
GROUP BY FIELDCOUNT
A trivial example demonstrating how the alias of a derived table and the list of column aliases (both optional) can be used:
SELECT
DBINFO.DESCR, DBINFO.DEF_CHARSET
FROM (SELECT *
FROM RDB$DATABASE) DBINFO
(DESCR, REL_ID, SEC_CLASS, DEF_CHARSET)
More about Derived Tables
Derived tables can be nested, can be unions or be used in unions, and can contain aggregate functions, subqueries and joins. |
A more useful example
Suppose we have a table COEFFS
which contains the coefficients of a number of quadratic equations we have to solve.
It has been defined like this:
create table coeffs (
a double precision not null,
b double precision not null,
c double precision not null,
constraint chk_a_not_zero check (a <> 0)
)
Depending on the values of a
, b
and c
, each equation may have zero, one or two solutions.
It is possible to find these solutions with a single-level query on table COEFFS
, but the code will look messy and several values (like the discriminant) will have to be calculated multiple times per row.
A derived table can help keep things clean here:
select
iif (D >= 0, (-b - sqrt(D)) / denom, null) sol_1,
iif (D > 0, (-b + sqrt(D)) / denom, null) sol_2
from
(select b, b*b - 4*a*c, 2*a from coeffs) (b, D, denom)
If we want to show the coefficients next to the solutions (which may not be a bad idea), we can alter the query like this:
select
a, b, c,
iif (D >= 0, (-b - sqrt(D)) / denom, null) sol_1,
iif (D > 0, (-b + sqrt(D)) / denom, null) sol_2
from
(select a, b, c, b*b - 4*a*c as D, 2*a as denom
from coeffs)
Notice that whereas the first query used a column aliases list for the derived table, the second adds aliases internally where needed. Both methods work, as long as every column is guaranteed to have a name.
All columns in the derived table will be evaluated as many times as they are specified in the main query. This is important, as it can lead to unexpected results when using non-deterministic functions. The following shows an example of this.
The result of this query produces three different values:

C1 80AAECED-65CD-4C2F-90AB-5D548C3C7279
C2 C1214CD3-423C-406D-B5BD-95BF432ED3E3
C3 EB176C10-F754-4689-8B84-64B666381154

To ensure a single result of the non-deterministic function, it has to be evaluated once inside the derived table itself. This query produces a single result for all three columns:

C1 80AAECED-65CD-4C2F-90AB-5D548C3C7279
C2 80AAECED-65CD-4C2F-90AB-5D548C3C7279
C3 80AAECED-65CD-4C2F-90AB-5D548C3C7279

An alternative solution is to wrap the non-deterministic query in an extra level of derived table.
This is an artifact of the current implementation. This behaviour may change in a future Firebird version. |
Selecting FROM
a Common Table Expression (CTE)
A common table expression — or CTE — is a more complex variant of the derived table, but it is also more powerful.
A preamble, starting with the keyword WITH
, defines one or more named CTEs, each with an optional column aliases list.
The main query, which follows the preamble, can then access these CTEs as if they were regular tables or views.
The CTEs go out of scope once the main query has run to completion.
For a full discussion of CTEs, please refer to the section Common Table Expressions (“WITH … AS … SELECT
”).
The following is a rewrite of our derived table example as a CTE:
with vars (b, D, denom) as (
select b, b*b - 4*a*c, 2*a from coeffs
)
select
iif (D >= 0, (-b - sqrt(D)) / denom, null) sol_1,
iif (D > 0, (-b + sqrt(D)) / denom, null) sol_2
from vars
Except for the fact that the calculations that have to be made first are now at the beginning, this isn’t a great improvement over the derived table version.
However, we can now also eliminate the double calculation of sqrt(D)
for every row:
with vars (b, D, denom) as (
select b, b*b - 4*a*c, 2*a from coeffs
),
vars2 (b, D, denom, sqrtD) as (
select b, D, denom, iif (D >= 0, sqrt(D), null) from vars
)
select
iif (D >= 0, (-b - sqrtD) / denom, null) sol_1,
iif (D > 0, (-b + sqrtD) / denom, null) sol_2
from vars2
The code is a little more complicated now, but it might execute more efficiently (depending on what takes more time: executing the SQRT
function or passing the values of b
, D
and denom
through an extra CTE).
Incidentally, we could have done the same with derived tables, but that would involve nesting.
All columns in the CTE will be evaluated as many times as they are specified in the main query. This is important, as it can lead to unexpected results when using non-deterministic functions. The following shows an example of this.
The result of this query produces three different values:

C1 80AAECED-65CD-4C2F-90AB-5D548C3C7279
C2 C1214CD3-423C-406D-B5BD-95BF432ED3E3
C3 EB176C10-F754-4689-8B84-64B666381154

To ensure a single result of the non-deterministic function, it has to be evaluated once inside the CTE itself. This query produces a single result for all three columns:

C1 80AAECED-65CD-4C2F-90AB-5D548C3C7279
C2 80AAECED-65CD-4C2F-90AB-5D548C3C7279
C3 80AAECED-65CD-4C2F-90AB-5D548C3C7279

An alternative solution is to wrap the non-deterministic query in an extra CTE or derived table.
This is an artifact of the current implementation. This behaviour may change in a future Firebird version. |
6.1.4. Joins
Joins combine data from two sources into a single set.
This is done on a row-by-row basis and usually involves checking a join condition to determine which rows should be merged and appear in the resulting dataset.
There are several types (INNER
, OUTER
) and classes (qualified, natural, etc.) of joins, each with its own syntax and rules.
Since joins can be chained, the datasets involved in a join may themselves be joined sets.
SELECT ... FROM <table-reference> [, <table-reference> ...] [...] <table-reference> ::= <table-primary> | <joined-table> <table-primary> ::= <table-or-query-name> [[AS] correlation-name] | [LATERAL] <derived-table> [<correlation-or-recognition>] | <parenthesized-joined-table> <table-or-query-name> ::= table-name | query-name | [package-name.]procedure-name [(<procedure-args>)] <procedure-args> ::= <value-expression> [, <value-expression> ...] <correlation-or-recognition> ::= [AS] correlation-name [(<column-name-list>)] <column-name-list> ::= column-name [, column-name ...] <derived-table> ::= (<query-expression>) <parenthesized-joined-table> ::= (<parenthesized-joined-table>) | (<joined-table>) <joined-table> ::= <cross-join> | <natural-join> | <qualified-join> <cross-join> ::= <table-reference> CROSS JOIN <table-primary> <natural-join> ::= <table-reference> NATURAL [<join-type>] JOIN <table-primary> <join-type> ::= INNER | { LEFT | RIGHT | FULL } [OUTER] <qualified-join> ::= <table-reference> [<join-type>] JOIN <table-primary> { ON <search-condition> | USING (<column-name-list>) }
Argument | Description |
---|---|
table-name |
Name of a table or view |
query-name |
Name of a CTE |
package-name |
Name of a package |
procedure-name |
Name of a selectable stored procedure |
procedure-args |
Selectable stored procedure input parameter(s) |
derived-table |
Derived table query expression |
correlation-name |
The alias of a data source (table, view, procedure, CTE, derived table) |
column-name |
Name or alias for a column in a relation, CTE or derived table |
search-condition |
Join condition (criterion) |
column-name-list |
List of aliases of the columns of a derived table, or the list of columns used for an equi-join |
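The syntax above offers two forms of join condition: ON takes an arbitrary search condition, while USING names columns that must exist under the same name in both sources and equates them. A sketch, with illustrative table names that both contain an id column:

```sql
select *
from orders o
join order_lines l using (id)

-- equivalent to the ON form:
-- join order_lines l on o.id = l.id
```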
Inner vs. Outer Joins
A join combines data rows from two sets (usually referred to as the left set and the right set). By default, only rows that meet the join condition (i.e. that match at least one row in the other set when the join condition is applied) make it into the result set. This default type of join is called an inner join. Suppose we have the following two tables:
Table A:

ID | S |
---|---|
87 | Just some text |
235 | Silence |

Table B:

CODE | X |
---|---|
-23 | 56.7735 |
87 | 416.0 |
If we join these tables like this:
select *
from A
join B on A.id = B.code;
then the result set will be:
ID | S | CODE | X |
---|---|---|---|
87 | Just some text | 87 | 416.0 |
The first row of A has been joined with the second row of B because together they met the condition “A.id = B.code”.
The other rows from the source tables have no match in the opposite set and are therefore not included in the join.
Remember, this is an INNER join.
We can make that fact explicit by writing:
select *
from A
inner join B on A.id = B.code;
However, since INNER is the default, it is usually omitted.
It is perfectly possible that a row in the left set matches several rows from the right set or vice versa. In that case, all those combinations are included, and we can get results like:
ID | S | CODE | X |
---|---|---|---|
87 |
Just some text |
87 |
416.0 |
87 |
Just some text |
87 |
-1.0 |
-23 |
Don’t know |
-23 |
56.7735 |
-23 |
Still don’t know |
-23 |
56.7735 |
-23 |
I give up |
-23 |
56.7735 |
Sometimes we want (or need) all the rows of one or both of the sources to appear in the joined set, even if they don’t match a record in the other source.
This is where outer joins come in.
A LEFT outer join includes all the records from the left set, but only matching records from the right set.
In a RIGHT outer join it’s the other way around.
A FULL outer join includes all the records from both sets.
In all outer joins, the “holes” (the places where an included source record doesn’t have a match in the other set) are filled up with NULLs.
To make an outer join, you must specify LEFT, RIGHT or FULL, optionally followed by the keyword OUTER.
Below are the results of the various outer joins when applied to our original tables A and B:
select *
from A
left outer join B on A.id = B.code;
ID | S | CODE | X |
---|---|---|---|
87 | Just some text | 87 | 416.0 |
235 | Silence | <null> | <null> |
select *
from A
right outer join B on A.id = B.code;

ID | S | CODE | X |
---|---|---|---|
<null> | <null> | -23 | 56.7735 |
87 | Just some text | 87 | 416.0 |
select *
from A
full outer join B on A.id = B.code;

ID | S | CODE | X |
---|---|---|---|
<null> | <null> | -23 | 56.7735 |
87 | Just some text | 87 | 416.0 |
235 | Silence | <null> | <null> |
Qualified joins
Qualified joins specify conditions for the combining of rows: either explicitly in an ON clause, or implicitly in a USING clause.
<qualified-join> ::=
    <table-reference> [<join-type>] JOIN <table-primary>
    { ON <search-condition> | USING (<column-name-list>) }

<join-type> ::= INNER | {LEFT | RIGHT | FULL} [OUTER]
Explicit-condition joins
Most qualified joins have an ON clause, with an explicit condition that can be any valid Boolean expression, but usually involves a comparison between the two sources involved.
Often, the condition is an equality test (or a number of ANDed equality tests) using the “=” operator.
Joins like these are called equi-joins.
(The examples in the section on inner and outer joins were all equi-joins.)
Examples of joins with an explicit condition:
/* Select all Detroit customers who made a purchase
in 2013, along with the purchase details: */
select * from customers c
join sales s on s.cust_id = c.id
where c.city = 'Detroit' and s.year = 2013;
/* Same as above, but include non-buying customers: */
select * from customers c
left join sales s on s.cust_id = c.id
where c.city = 'Detroit' and s.year = 2013;
/* For each man, select the women who are taller than he.
Men for whom no such woman exists are not included. */
select m.fullname as man, f.fullname as woman
from males m
join females f on f.height > m.height;
/* Select all pupils with their class and mentor.
Pupils without a mentor are also included.
Pupils without a class are not included. */
select p.firstname, p.middlename, p.lastname,
c.name, m.name
from pupils p
join classes c on c.id = p.class
left join mentors m on m.id = p.mentor;
Named columns joins
Equi-joins often compare columns that have the same name in both tables. If this is the case, we can also use the second type of qualified join: the named columns join.
Named columns joins are not supported in Dialect 1 databases.
Named columns joins have a USING clause which states only the column names.
So instead of this:
select * from flotsam f
join jetsam j
on f.sea = j.sea
and f.ship = j.ship;
we can also write:
select * from flotsam
join jetsam using (sea, ship)
which is considerably shorter.
The result set is a little different though — at least when using “SELECT *”:

- The explicit-condition join — with the ON clause — will contain each of the columns SEA and SHIP twice: once from table FLOTSAM, and once from table JETSAM. Obviously, they will have the same values.
- The named columns join — with the USING clause — will contain these columns only once.
If you want all the columns in the result set of the named columns join, set up your query like this:
select f.*, j.*
from flotsam f
join jetsam j using (sea, ship);
This will give you the same result set as the explicit-condition join.
For an OUTER named columns join, there’s an additional twist when using “SELECT *” or an unqualified column name from the USING list: if a row from one source set doesn’t have a match in the other but must still be included because of the LEFT, RIGHT or FULL directive, the merged column in the joined set gets the non-NULL value.
That is fair enough, but now you can’t tell whether this value came from the left set, the right set, or both.
This can be especially deceiving when the value came from the right-hand set, because “*” always shows combined columns in the left-hand part — even in the case of a RIGHT join.
Whether this is a problem or not depends on the situation.
If it is, use the “a.*, b.*” approach shown above, with a and b the names or aliases of the two sources.
Or better yet, avoid “*” altogether in your serious queries and qualify all column names in joined sets.
This has the additional benefit that it forces you to think about which data you want to retrieve and where from.
This has the additional benefit that it forces you to think about which data you want to retrieve and where from.
It is your responsibility to make sure the column names in the USING list are of compatible types between the two sources.
If the types are compatible but not equal, the engine converts them to the type with the broadest range of values before comparing the values.
This will also be the data type of the merged column that shows up in the result set if “SELECT *” or the unqualified column name is used.
Qualified columns on the other hand will always retain their original data type.
If, when joining by named columns, you use a join column in the WHERE clause, refer to it by its qualified name; otherwise an index on that column will not be used.
The reason is that the unspecified (merged) column is implicitly replaced by COALESCE(a.x, b.x). This trick is used to disambiguate column names, but it also interferes with the use of the index.
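The coalescing behaviour of the merged column can also be made visible directly. A sketch using the FLOTSAM and JETSAM tables from the example above, selecting both the merged column and the qualified source columns so that the origin of each value stays apparent:

```sql
-- 'sea' below is the merged column (effectively COALESCE(f.sea, j.sea));
-- the qualified columns show which side actually supplied the value:
select sea, f.sea as flotsam_sea, j.sea as jetsam_sea
from flotsam f
full join jetsam j using (sea, ship);
```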
Natural joins
Taking the idea of the named columns join a step further, a natural join performs an automatic equi-join on all the columns that have the same name in the left and right table. The data types of these columns must be compatible.
Natural joins are not supported in Dialect 1 databases.
<natural-join> ::= <table-reference> NATURAL [<join-type>] JOIN <table-primary>

<join-type> ::= INNER | {LEFT | RIGHT | FULL} [OUTER]
Given these two tables:
create table TA (
a bigint,
s varchar(12),
ins_date date
);
create table TB (
a bigint,
descr varchar(12),
x float,
ins_date date
);
A natural join on TA and TB would involve the columns a and ins_date, and the following two statements would have the same effect:
select * from TA
natural join TB;
select * from TA
join TB using (a, ins_date);
Like all joins, natural joins are inner joins by default, but you can turn them into outer joins by specifying LEFT, RIGHT or FULL before the JOIN keyword.
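For instance, the two equivalent statements above become left outer joins on TA and TB simply by adding the join type:

```sql
-- Natural left join: matches on the common columns a and ins_date,
-- keeping all rows of TA, with NULLs where TB has no match
select * from TA
natural left join TB;

-- The equivalent named columns join
select * from TA
left join TB using (a, ins_date);
```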
If there are no columns with the same name in the two source relations, a CROSS JOIN is performed.
Cross joins
A cross join produces the full set product — or Cartesian product — of the two data sources. This means that it successively matches every row in the left source to every row in the right source.
FROM <table-reference> [, <table-reference> ...]

<cross-join> ::= <table-reference> CROSS JOIN <table-primary>
Use of the comma syntax is discouraged, and we recommend using the explicit join syntax.
Cross-joining two sets is equivalent to joining them on a tautology (a condition that is always true). The following two statements have the same effect:
select * from TA
cross join TB;
select * from TA
join TB on TRUE;
Cross joins are inner joins, because they only include matching records — it just so happens that every record matches! An outer cross join, if it existed, wouldn’t add anything to the result, because what outer joins add are non-matching records, and these don’t exist in cross joins.
Cross joins are seldom useful, except if you want to list all the possible combinations of two or more variables. Suppose you are selling a product that comes in different sizes, different colors and different materials. If these variables are each listed in a table of their own, this query would return all the combinations:
select m.name, s.size, c.name
from materials m
cross join sizes s
cross join colors c;
Implicit Joins
In the SQL:89 standard, the tables involved in a join were specified as a comma-delimited list in the FROM clause (in other words, a cross join).
The join conditions were then specified in the WHERE clause among other search terms.
This type of join is called an implicit join.
An example of an implicit join:
/*
* A sample of all Detroit customers who
* made a purchase.
*/
SELECT *
FROM customers c, sales s
WHERE s.cust_id = c.id AND c.city = 'Detroit'
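For comparison, the same query in the explicit (SQL-92) join syntax, which is the recommended style:

```sql
/*
 * The join condition moves from WHERE to an ON clause;
 * only the genuine filter remains in WHERE.
 */
SELECT *
FROM customers c
JOIN sales s ON s.cust_id = c.id
WHERE c.city = 'Detroit';
```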
Mixing Explicit and Implicit Joins
Mixing explicit and implicit joins is not recommended, but is allowed. However, some types of mixing are not supported by Firebird.
For example, the following query will raise the error “Column does not belong to referenced table”:
SELECT *
FROM TA, TB
JOIN TC ON TA.COL1 = TC.COL1
WHERE TA.COL2 = TB.COL2
That is because the explicit join cannot see the TA table.
However, the next query will complete without error, since the restriction is not violated.
SELECT *
FROM TA, TB
JOIN TC ON TB.COL1 = TC.COL1
WHERE TA.COL2 = TB.COL2
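The safest way to avoid such restrictions is not to mix at all. The first, failing query can be rewritten with an explicit CROSS JOIN, which makes TA visible to the ON clause:

```sql
-- Fully explicit version of the query that raised the error:
SELECT *
FROM TA
CROSS JOIN TB
JOIN TC ON TA.COL1 = TC.COL1
WHERE TA.COL2 = TB.COL2
```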
A Note on Equality
This note about equality and inequality operators applies everywhere in Firebird’s SQL language, not only in JOIN conditions.
The “=” operator, which is explicitly used in many conditional joins and implicitly in named columns joins and natural joins, only matches values to values.
According to the SQL standard, NULL is not a value and hence two NULLs are neither equal nor unequal to one another.
If you need NULLs to match each other in a join, use the IS NOT DISTINCT FROM operator.
This operator returns true if the operands have the same value or if they are both NULL.
select *
from A join B
on A.id is not distinct from B.code;
Likewise, when you want to join on inequality, use IS DISTINCT FROM, not “<>”, if you want NULL to be considered different from any value and two NULLs considered equal:
select *
from A join B
on A.id is distinct from B.code;
Ambiguous field names in joins
Firebird rejects unqualified field names in a query if these field names exist in more than one dataset involved in a join.
This is even true for inner equi-joins where the field name figures in the ON clause like this:
select a, b, c
from TA
join TB on TA.a = TB.a;
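To make the query above acceptable, qualify every field name that occurs in more than one source. A sketch (it is assumed here, purely for illustration, that b comes from TA and c from TB):

```sql
-- The join column a exists in both tables and must be qualified:
select TA.a, TA.b, TB.c
from TA
join TB on TA.a = TB.a;
```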
There is one exception to this rule: with named columns joins and natural joins, the unqualified field name of a column taking part in the matching process may be used legally and refers to the merged column of the same name.
For named columns joins, these are the columns listed in the USING clause.
For natural joins, they are the columns that have the same name in both relations.
But please notice again that, especially in outer joins, plain colname isn’t always the same as left.colname or right.colname.
Types may differ, and one of the qualified columns may be NULL while the other isn’t.
In that case, the value in the merged, unqualified column may mask the fact that one of the source values is absent.
Joins with LATERAL Derived Tables
A derived table defined with the LATERAL keyword is called a lateral derived table.
If a derived table is defined as lateral, then it is allowed to refer to other tables in the same FROM clause, but only those declared before it in the FROM clause.
/* select customers with their last order date and number */
select c.name, ox.order_date as last_order, ox.number
from customer c
left join LATERAL (
select first 1 o.order_date, o.number
from orders o
where o.id_customer = c.id
order by o.ORDER_DATE desc
) as ox on true
--
select dt.population, dt.city_name, c.country_name
from (select distinct country_name from cities) AS c
cross join LATERAL (
select first 1 city_name, population
from cities
where cities.country_name = c.country_name
order by population desc
) AS dt;
--
select salespeople.name,
max_sale.amount,
customer_of_max_sale.customer_name
from salespeople,
LATERAL ( select max(amount) as amount
from all_sales
where all_sales.salesperson_id = salespeople.id
) as max_sale,
LATERAL ( select customer_name
from all_sales
where all_sales.salesperson_id = salespeople.id
and all_sales.amount = max_sale.amount
) as customer_of_max_sale;
6.1.5. The WHERE clause
The WHERE clause serves to limit the rows returned to the ones that the caller is interested in.
The condition following the keyword WHERE can be as simple as a check like “AMOUNT = 3” or it can be a multilayered, convoluted expression containing subselects, predicates, function calls, mathematical and logical operators, context variables and more.
The condition in the WHERE clause is often called the search condition, the search expression or simply the search.
In DSQL and ESQL, the search condition may contain parameters.
This is useful if a query has to be repeated a number of times with different input values.
In the SQL string as it is passed to the server, question marks are used as placeholders for the parameters.
These question marks are called positional parameters because they can only be told apart by their position in the string.
Connectivity libraries often support named parameters of the form :id, :amount, :a etc.
These are more user-friendly; the library takes care of translating the named parameters to positional parameters before passing the statement to the server.
The search condition may also contain local (PSQL) or host (ESQL) variable names, preceded by a colon.
SELECT ... FROM ... [...] WHERE <search-condition> [...]
Parameter | Description |
---|---|
search-condition | A Boolean expression returning TRUE, FALSE or UNKNOWN (NULL) |
Only those rows for which the search condition evaluates to TRUE are included in the result set.
Be careful with possible NULL outcomes: if you negate a NULL expression with NOT, the result will still be NULL and the row will not pass.
This is demonstrated in one of the examples below.
Examples
select genus, species from mammals
where family = 'Felidae'
order by genus;
select * from persons
where birthyear in (1880, 1881)
or birthyear between 1891 and 1898;
select name, street, borough, phone
from schools s
where exists (select * from pupils p where p.school = s.id)
order by borough, street;
select * from employees
where salary >= 10000 and position <> 'Manager';
select name from wrestlers
where region = 'Europe'
and weight > all (select weight from shot_putters
where region = 'Africa');
select id, name from players
where team_id = (select id from teams where name = 'Buffaloes');
select sum (population) from towns
where name like '%dam'
and province containing 'land';
select password from usertable
where username = current_user;
The following example shows what can happen if the search condition evaluates to NULL.
Suppose you have a table listing children’s names and the number of marbles they possess. At a certain moment, the table contains this data:
CHILD | MARBLES |
---|---|
Anita | 23 |
Bob E. | 12 |
Chris | <null> |
Deirdre | 1 |
Eve | 17 |
Fritz | 0 |
Gerry | 21 |
Hadassah | <null> |
Isaac | 6 |
First, please notice the difference between NULL and 0: Fritz is known to have no marbles at all, Chris’s and Hadassah’s marble counts are unknown.
Now, if you issue this SQL statement:
select list(child) from marbletable where marbles > 10;
you will get the names Anita, Bob E., Eve and Gerry. These children all have more than 10 marbles.
If you negate the expression:
select list(child) from marbletable where not marbles > 10;
it’s the turn of Deirdre, Fritz and Isaac to fill the list. Chris and Hadassah are not included, because they aren’t known to have ten or fewer marbles. Should you change that last query to:
select list(child) from marbletable where marbles <= 10;
the result will still be the same, because the expression NULL <= 10 yields UNKNOWN.
This is not the same as TRUE, so Chris and Hadassah are not listed.
If you want them listed with the “poor” children, change the query to:
select list(child) from marbletable
where marbles <= 10 or marbles is null;
Now the search condition becomes true for Chris and Hadassah, because “marbles is null” obviously returns TRUE in their case.
In fact, the search condition cannot be NULL for anybody now.
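If treating an unknown marble count as zero is acceptable for your report, the NULLs can also be replaced before the comparison. Note that this is a different semantic choice, not an equivalent query:

```sql
-- COALESCE substitutes 0 for NULL, so Chris and Hadassah
-- are counted among the "poor" children as well:
select list(child) from marbletable
where coalesce(marbles, 0) <= 10;
```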
Lastly, two examples of SELECT queries with parameters in the search.
It depends on the application how you should define query parameters and even if it is possible at all.
Notice that queries like these cannot be executed immediately: they have to be prepared first.
Once a parameterized query has been prepared, the user (or calling code) can supply values for the parameters and have it executed many times, entering new values before every call.
How the values are entered and the execution started is up to the application.
In a GUI environment, the user typically types the parameter values in one or more text boxes and then clicks an “Execute”, “Run” or “Refresh” button.
select name, address, phone from stores
where city = ? and class = ?;
select * from pants
where model = :model and size = :size and color = :col;
The last query cannot be passed directly to the engine; the application must convert it to the other format first, mapping named parameters to positional parameters.
6.1.6. The GROUP BY clause
GROUP BY merges output rows that have the same combination of values in its item list into a single row.
Aggregate functions in the select list are applied to each group individually instead of to the dataset as a whole.
If the select list only contains aggregate columns or, more generally, columns whose values don’t depend on individual rows in the underlying set, GROUP BY is optional.
When omitted, the final result set consists of a single row (provided that at least one aggregated column is present).
If the select list contains both aggregate columns and columns whose values may vary per row, the GROUP BY clause becomes mandatory.
SELECT ... FROM ...
GROUP BY <grouping-item> [, <grouping-item> ...]
[HAVING <grouped-row-condition>]
...

<grouping-item> ::= <non-aggr-select-item> | <non-aggr-expression>

<non-aggr-select-item> ::= column-copy | column-alias | column-position
Argument | Description |
---|---|
<grouping-item> | Expression to group on |
non-aggr-expression | Any non-aggregating expression that is not included in the SELECT list |
column-copy | A literal copy, from the SELECT list, of an expression that contains no aggregate function |
column-alias | The alias, from the SELECT list, of an expression that contains no aggregate function |
column-position | The position number, in the SELECT list, of an expression that contains no aggregate function |
A general rule of thumb is that every non-aggregate item in the SELECT list must also be in the GROUP BY list.
You can do this in three ways:
- By copying the item verbatim from the select list, e.g. “class” or “'D:' || upper(doccode)”.
- By specifying the column alias, if it exists.
- By specifying the column position as an integer literal between 1 and the number of columns. Integer values resulting from expressions or parameter substitutions are simply constant values, not column positions, and will be used as such in the grouping. They will have no effect though, as their value is the same for each row.
If you group by a column position, the expression at that position is copied internally from the select list. If it concerns a subquery, that subquery will be executed again in the grouping phase. That is to say, grouping by the column position, rather than duplicating the subquery expression in the grouping clause, saves keystrokes and bytes, but it is not a way of saving processing cycles!
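To illustrate the three options, here is one and the same grouping written in each of the three ways, using the STUDENTS table from the examples below:

```sql
select class as cl, count(*) from students group by class; -- verbatim copy
select class as cl, count(*) from students group by cl;    -- column alias
select class as cl, count(*) from students group by 1;     -- column position
```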
In addition to the required items, the grouping list may also contain:
- Columns from the source table that are not in the select list, or non-aggregate expressions based on such columns. Adding such columns may further subdivide the groups. However, since these columns are not in the select list, you can’t tell which aggregated row corresponds to which value in the column. So, in general, if you are interested in this information, you also include the column or expression in the select list — which brings you back to the rule: “every non-aggregate column in the select list must also be in the grouping list”.
- Expressions that aren’t dependent on the data in the underlying set, e.g. constants, context variables, single-value non-correlated subselects etc. This is only mentioned for completeness, as adding such items is utterly pointless: they don’t affect the grouping at all. “Harmless but useless” items like these may also figure in the select list without being copied to the grouping list.
Examples
When the select list contains only aggregate columns, GROUP BY is not mandatory:
select count(*), avg(age) from students
where sex = 'M';
This will return a single row listing the number of male students and their average age.
Adding expressions that don’t depend on values in individual rows of table STUDENTS doesn’t change that:
select count(*), avg(age), current_date from students
where sex = 'M';
The row will now have an extra column showing the current date, but other than that, nothing fundamental has changed.
A GROUP BY clause is still not required.
However, in both the above examples it is allowed. This is perfectly valid:
select count(*), avg(age) from students
where sex = 'M'
group by class;
This will return a row for each class that has boys in it, listing the number of boys and their average age in that particular class.
(If you also leave the current_date field in, this value will be repeated on every row, which is not very exciting.)
The above query has a major drawback though: it gives you information about the different classes, but it doesn’t tell you which row applies to which class.
To get that extra bit of information, add the non-aggregate column CLASS to the select list:
select class, count(*), avg(age) from students
where sex = 'M'
group by class;
Now we have a useful query.
Notice that the addition of column CLASS also makes the GROUP BY clause mandatory.
We can’t drop that clause anymore, unless we also remove CLASS from the column list.
The output of our last query may look something like this:
CLASS | COUNT | AVG |
---|---|---|
2A | 12 | 13.5 |
2B | 9 | 13.9 |
3A | 11 | 14.6 |
3B | 12 | 14.4 |
… | … | … |
The headings “COUNT” and “AVG” are not very informative. In a simple case like this, you might get away with that, but in general you should give aggregate columns a meaningful name by aliasing them:
select class,
count(*) as num_boys,
avg(age) as boys_avg_age
from students
where sex = 'M'
group by class;
Adding more non-aggregate (or, row-dependent) columns requires adding them to the GROUP BY clause too.
For instance, you might want to see the above information for girls as well; and you may also want to differentiate between boarding and day students:
select class,
sex,
boarding_type,
count(*) as number,
avg(age) as avg_age
from students
group by class, sex, boarding_type;
This may give you the following result:
CLASS | SEX | BOARDING_TYPE | NUMBER | AVG_AGE |
---|---|---|---|---|
2A | F | BOARDING | 9 | 13.3 |
2A | F | DAY | 6 | 13.5 |
2A | M | BOARDING | 7 | 13.6 |
2A | M | DAY | 5 | 13.4 |
2B | F | BOARDING | 11 | 13.7 |
2B | F | DAY | 5 | 13.7 |
2B | M | BOARDING | 6 | 13.8 |
… | … | … | … | … |
Each row in the result set corresponds to one particular combination of the columns CLASS, SEX and BOARDING_TYPE.
The aggregate results — number and average age — are given for each of these groups individually.
In a query like this, you don’t see a total for boys as a whole, or day students as a whole.
That’s the tradeoff: the more non-aggregate columns you add, the more you can pinpoint specific groups, but the more you also lose sight of the general picture.
Of course, you can still obtain the “coarser” aggregates through separate queries.
HAVING
Just as a WHERE clause limits the rows in a dataset to those that meet the search condition, so the HAVING sub-clause imposes restrictions on the aggregated rows in a grouped set.
HAVING is optional, and can only be used in conjunction with GROUP BY.
The condition(s) in the HAVING clause can refer to:
- Any aggregated column in the select list. This is the most widely used case.
- Any aggregated expression that is not in the select list, but allowed in the context of the query. This is sometimes useful too.
- Any column in the GROUP BY list. While legal, it is more efficient to filter on these non-aggregated data at an earlier stage: in the WHERE clause.
- Any expression whose value doesn’t depend on the contents of the dataset (like a constant or a context variable). This is valid but not useful, because it will either suppress the entire set or leave it untouched, based on conditions that have nothing to do with the set itself.
A HAVING clause cannot contain:

- Non-aggregated column expressions that are not in the GROUP BY list.
- Column positions. An integer in the HAVING clause is just an integer, not a column position.
- Column aliases — not even if they appear in the GROUP BY clause!
Examples
Building on our earlier examples, this could be used to skip small groups of students:
select class,
count(*) as num_boys,
avg(age) as boys_avg_age
from students
where sex = 'M'
group by class
having count(*) >= 5;
To select only groups that have a minimum age spread:
select class,
count(*) as num_boys,
avg(age) as boys_avg_age
from students
where sex = 'M'
group by class
having max(age) - min(age) > 1.2;
Notice that if you’re interested in this information, you’ll likely also include min(age) and max(age) — or the expression “max(age) - min(age)”.
To include only 3rd classes:
select class,
count(*) as num_boys,
avg(age) as boys_avg_age
from students
where sex = 'M'
group by class
having class starting with '3';
Better would be to move this condition to the WHERE clause:
select class,
count(*) as num_boys,
avg(age) as boys_avg_age
from students
where sex = 'M' and class starting with '3'
group by class;
6.1.7. The WINDOW Clause
The WINDOW clause defines one or more named windows that can be referenced by window functions in the current query specification.
<query_spec> ::=
    SELECT
    [<limit_clause>]
    [<distinct_clause>]
    <select_list>
    <from_clause>
    [<where_clause>]
    [<group_clause>]
    [<having_clause>]
    [<named_windows_clause>]
    [<plan_clause>]

<named_windows_clause> ::= WINDOW <window-definition> [, <window-definition> ...]

<window-definition> ::= new-window-name AS (<window-specification-details>)

<window-specification-details> ::= !! See Window (Analytical) Functions !!
In a query with multiple SELECT and WINDOW clauses (for example, with subqueries), the scope of the new-window-name is confined to its query context.
That means a window name from an inner context cannot be used in an outer context, nor vice versa.
However, the same window name can be used independently in different contexts, though to avoid confusion it might be better to avoid this.
For more information, see Window (Analytical) Functions.
Example Using Named Windows
select
id,
department,
salary,
count(*) over w1,
first_value(salary) over w2,
last_value(salary) over w2
from employee
window w1 as (partition by department),
w2 as (w1 order by salary)
order by department, salary;
6.1.8. The PLAN Clause
The PLAN clause enables the user to submit a data retrieval plan, thus overriding the plan that the optimizer would have generated automatically.
PLAN <plan-expression>

<plan-expression> ::=
    (<plan-item> [, <plan-item> ...])
  | <sorted-item>
  | <joined-item>
  | <merged-item>
  | <hash-item>

<sorted-item> ::= SORT (<plan-item>)

<joined-item> ::= JOIN (<plan-item>, <plan-item> [, <plan-item> ...])

<merged-item> ::= [SORT] MERGE (<sorted-item>, <sorted-item> [, <sorted-item> ...])

<hash-item> ::= HASH (<plan-item>, <plan-item> [, <plan-item> ...])

<plan-item> ::= <basic-item> | <plan-expression>

<basic-item> ::=
    <relation> { NATURAL
               | INDEX (<indexlist>)
               | ORDER index [INDEX (<indexlist>)] }

<relation> ::= table | view [table]

<indexlist> ::= index [, index ...]
Argument | Description |
---|---|
table | Table name or its alias |
view | View name |
index | Index name |
Every time a user submits a query to the Firebird engine, the optimizer computes a data retrieval strategy.
Most Firebird clients can make this retrieval plan visible to the user.
In Firebird’s own isql utility, this is done with the command SET PLAN ON.
If you are only interested in looking at query plans, SET PLANONLY ON will show the plan without executing the query.
Use SET PLANONLY OFF to execute the query and show the plan.
A more detailed plan can be obtained by enabling the advanced plan.
In isql this can be done with SET EXPLAIN ON.
The advanced plan displays more detailed information about the access methods used by the optimizer; however, it cannot be included in the PLAN clause of a statement.
The description of the advanced plan is beyond the scope of this Language Reference.
In most situations, you can trust that Firebird will select the optimal query plan for you. However, if you have complicated queries that seem to be underperforming, it may be worth your while to examine the plan and see if you can improve on it.
Simple Plans
The simplest plans consist of a relation name followed by a retrieval method.
For example, for an unsorted single-table select without a WHERE clause:
select * from students
plan (students natural);
Advanced plan:
Select Expression
    -> Table "STUDENTS" Full Scan
If there’s a WHERE or a HAVING clause, you can specify the index to be used for finding matches:
select * from students
where class = '3C'
plan (students index (ix_stud_class));
Advanced plan:
Select Expression
    -> Filter
        -> Table "STUDENTS" Access By ID
            -> Bitmap
                -> Index "IX_STUD_CLASS" Range Scan (full match)
The INDEX directive is also used for join conditions (to be discussed a little later).
It can contain a list of indexes, separated by commas.
ORDER specifies the index for sorting the set if an ORDER BY or GROUP BY clause is present:
select * from students
plan (students order pk_students)
order by id;
Advanced plan:
Select Expression
    -> Table "STUDENTS" Access By ID
        -> Index "PK_STUDENTS" Full Scan
ORDER and INDEX can be combined:
select * from students
where class >= '3'
plan (students order pk_students index (ix_stud_class))
order by id;
Advanced plan:
Select Expression
    -> Filter
        -> Table "STUDENTS" Access By ID
            -> Index "PK_STUDENTS" Full Scan
            -> Bitmap
                -> Index "IX_STUD_CLASS" Range Scan (lower bound: 1/1)
It is perfectly OK if ORDER and INDEX specify the same index:
select * from students
where class >= '3'
plan (students order ix_stud_class index (ix_stud_class))
order by class;
Advanced plan:
Select Expression
    -> Filter
        -> Table "STUDENTS" Access By ID
            -> Index "IX_STUD_CLASS" Range Scan (lower bound: 1/1)
            -> Bitmap
                -> Index "IX_STUD_CLASS" Range Scan (lower bound: 1/1)
For sorting sets when there’s no usable index available (or if you want to suppress its use), leave out ORDER and prepend the plan expression with SORT:
select * from students
plan sort (students natural)
order by name;
Advanced plan:
Select Expression -> Sort (record length: 128, key length: 56) -> Table "STUDENTS" Full Scan
Or when an index is used for the search:
select * from students
where class >= '3'
plan sort (students index (ix_stud_class))
order by name;
Advanced plan:
Select Expression -> Sort (record length: 136, key length: 56) -> Filter -> Table "STUDENTS" Access By ID -> Bitmap -> Index "IX_STUD_CLASS" Range Scan (lower bound: 1/1)
Notice that SORT, unlike ORDER, is outside the parentheses. This reflects the fact that the data rows are retrieved unordered and sorted afterwards by the engine.
When selecting from a view, specify the view and the table involved.
For instance, if you have a view FRESHMEN that selects the first-year students:
select * from freshmen
plan (freshmen students natural);
Advanced plan:
Select Expression -> Table "STUDENTS" as "FRESHMEN" Full Scan
Or, for instance:
select * from freshmen
where id > 10
plan sort (freshmen students index (pk_students))
order by name desc;
Advanced plan:
Select Expression -> Sort (record length: 144, key length: 24) -> Filter -> Table "STUDENTS" as "FRESHMEN" Access By ID -> Bitmap -> Index "PK_STUDENTS" Range Scan (lower bound: 1/1)
If a table or view has been aliased, it is the alias, not the original name, that must be used in the PLAN clause.
Composite Plans
When a join is made, you can specify the index which is to be used for matching.
You must also use the JOIN directive on the two streams in the plan:
select s.id, s.name, s.class, c.mentor
from students s
join classes c on c.name = s.class
plan join (s natural, c index (pk_classes));
Advanced plan:
Select Expression -> Nested Loop Join (inner) -> Table "STUDENTS" as "S" Full Scan -> Filter -> Table "CLASSES" as "C" Access By ID -> Bitmap -> Index "PK_CLASSES" Unique Scan
The same join, sorted on an indexed column:
select s.id, s.name, s.class, c.mentor
from students s
join classes c on c.name = s.class
plan join (s order pk_students, c index (pk_classes))
order by s.id;
Advanced plan:
Select Expression -> Nested Loop Join (inner) -> Table "STUDENTS" as "S" Access By ID -> Index "PK_STUDENTS" Full Scan -> Filter -> Table "CLASSES" as "C" Access By ID -> Bitmap -> Index "PK_CLASSES" Unique Scan
And on a non-indexed column:
select s.id, s.name, s.class, c.mentor
from students s
join classes c on c.name = s.class
plan sort (join (s natural, c index (pk_classes)))
order by s.name;
Advanced plan:
Select Expression -> Sort (record length: 152, key length: 12) -> Nested Loop Join (inner) -> Table "STUDENTS" as "S" Full Scan -> Filter -> Table "CLASSES" as "C" Access By ID -> Bitmap -> Index "PK_CLASSES" Unique Scan
With a search condition added:
select s.id, s.name, s.class, c.mentor
from students s
join classes c on c.name = s.class
where s.class <= '2'
plan sort (join (s index (fk_student_class), c index (pk_classes)))
order by s.name;
Advanced plan:
Select Expression -> Sort (record length: 152, key length: 12) -> Nested Loop Join (inner) -> Filter -> Table "STUDENTS" as "S" Access By ID -> Bitmap -> Index "FK_STUDENT_CLASS" Range Scan (lower bound: 1/1) -> Filter -> Table "CLASSES" as "C" Access By ID -> Bitmap -> Index "PK_CLASSES" Unique Scan
As a left outer join:
select s.id, s.name, s.class, c.mentor
from classes c
left join students s on c.name = s.class
where s.class <= '2'
plan sort (join (c natural, s index (fk_student_class)))
order by s.name;
Advanced plan:
Select Expression -> Sort (record length: 192, key length: 56) -> Filter -> Nested Loop Join (outer) -> Table "CLASSES" as "C" Full Scan -> Filter -> Table "STUDENTS" as "S" Access By ID -> Bitmap -> Index "FK_STUDENT_CLASS" Range Scan (full match)
If there are no indexes available to match the join condition (or if you don’t want to use them), the streams can be connected using the HASH or MERGE method.

To connect the streams using the HASH method, the HASH directive is used in the plan instead of the JOIN directive.
In this case, the smaller (secondary) stream is materialized completely into an internal buffer.
While reading this secondary stream, a hash function is applied and a pair {hash, pointer to buffer} is written to a hash table.
Then the primary stream is read and its hash key is tested against the hash table.
select *
from students s
join classes c on c.cookie = s.cookie
plan hash (c natural, s natural)
Advanced plan:
Select Expression -> Filter -> Hash Join (inner) -> Table "STUDENTS" as "S" Full Scan -> Record Buffer (record length: 145) -> Table "CLASSES" as "C" Full Scan
For a MERGE join, the plan must first sort both streams on their join column(s) and then merge. This is achieved with the SORT directive (which we’ve already seen) and MERGE instead of JOIN:
select * from students s
join classes c on c.cookie = s.cookie
plan merge (sort (c natural), sort (s natural));
Adding an ORDER BY clause means the result of the merge must also be sorted:
select * from students s
join classes c on c.cookie = s.cookie
plan sort (merge (sort (c natural), sort (s natural)))
order by c.name, s.id;
Finally, we add a search condition on two indexable columns of table STUDENTS:
select * from students s
join classes c on c.cookie = s.cookie
where s.id < 10 and s.class <= '2'
plan sort (merge (sort (c natural),
sort (s index (pk_students, fk_student_class))))
order by c.name, s.id;
As follows from the formal syntax definition, JOINs and MERGEs in the plan may combine more than two streams. Also, every plan expression may be used as a plan item in an encompassing plan. This means that plans of certain complicated queries may have various nesting levels.

Finally, instead of MERGE you may also write SORT MERGE. As this makes no difference and may create confusion with “real” SORT directives (the ones that do make a difference), it’s best to stick to plain MERGE.
In addition to the plan for the main query, you can specify a plan for each subquery. For example, the following query with multiple plans will work:
select *
from color
where exists (
select *
from horse
where horse.code_color = color.code_color
plan (horse index (fk_horse_color)))
plan (color natural)
Occasionally, the optimizer will accept a plan and then not follow it, even though it does not reject it as invalid. It is advisable to treat such a plan as “deprecated”.
6.1.9. UNION
The UNION clause concatenates two or more datasets, thus increasing the number of rows but not the number of columns. Datasets taking part in a UNION must have the same number of columns, and columns at corresponding positions must be of the same type. By default, a union suppresses duplicate rows. UNION ALL shows all rows, including any duplicates. The optional DISTINCT keyword makes the default behaviour explicit.
<query-expression> ::=
  [<with-clause>] <query-expression-body>
  [<order-by-clause>]
  [{ <rows-clause> | [<result-offset-clause>] [<fetch-first-clause>] }]

<query-expression-body> ::=
  <query-term>
  | <query-expression-body> UNION [{ DISTINCT | ALL }] <query-term>

<query-term> ::= <query-primary>

<query-primary> ::=
  <query-specification>
  | (<query-expression-body>
     [<order-by-clause>]
     [<result-offset-clause>] [<fetch-first-clause>])

<query-specification> ::=
  SELECT <limit-clause> [{ ALL | DISTINCT }] <select-list>
  FROM <table-reference> [, <table-reference> ...]
  [WHERE <search-condition>]
  [GROUP BY <value-expression> [, <value-expression> ...]]
  [HAVING <search-condition>]
  [WINDOW <window-definition> [, <window-definition> ...]]
  [PLAN <plan-expression>]
See also Full SELECT Syntax for the full syntax.
Unions take their column names from the first select query. If you want to alias union columns, do so in the column list of the topmost SELECT. Aliases in other participating selects are allowed and may even be useful, but will not propagate to the union level.
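For instance, with the translators and proofreaders tables used in the examples later in this section, aliasing in the first select determines the union’s column names (the alias TEL in the second select is hypothetical and only illustrates the point):

```sql
-- The union's output columns are named FULLNAME and PHONE,
-- taken from the first select; the alias TEL in the second
-- select is legal but does not reach the union level.
select name as fullname, phone
from translators
union
select name, telephone as tel
from proofreaders;
```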
If a union has an ORDER BY clause, the only allowed sort items are integer literals indicating 1-based column positions, optionally followed by an ASC/DESC and/or a NULLS {FIRST | LAST} directive. This also implies that you cannot order a union by anything that isn’t a column in the union. (You can, however, wrap it in a derived table, which gives you back all the usual sort options.)
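As a sketch of that workaround (reusing the translators and proofreaders tables from the examples later in this section), wrapping the union in a derived table makes it possible to sort by an expression rather than a column position:

```sql
-- The derived table alias u makes the union's columns addressable,
-- so the outer query can sort by any expression over them.
select u.name, u.phone
from (
  select name, phone from translators
  union
  select name, telephone from proofreaders
) u
order by char_length(u.name) desc, u.name;
```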
Unions are allowed in subqueries of any kind and can themselves contain subqueries. They can also contain joins, and can take part in a join when wrapped in a derived table.
Examples
This query presents information from different music collections in one dataset using unions:
select id, title, artist, length, 'CD' as medium
from cds
union
select id, title, artist, length, 'LP'
from records
union
select id, title, artist, length, 'MC'
from cassettes
order by 3, 2; -- artist, title
If id, title, artist and length are the only fields in the tables involved, the query can also be written as:
select c.*, 'CD' as medium
from cds c
union
select r.*, 'LP'
from records r
union
select c.*, 'MC'
from cassettes c
order by 3, 2; -- artist, title
Qualifying the “stars” is necessary here because they are not the only item in the column list. Notice how the “c” aliases in the first and third select do not conflict with each other: their scopes are not union-wide but apply only to their respective select queries.
The next query retrieves names and phone numbers from translators and proofreaders.
Translators who also work as proofreaders will show up only once in the result set, provided their phone number is the same in both tables.
The same result can be obtained without DISTINCT. With ALL, these people would appear twice.
select name, phone from translators
union distinct
select name, telephone from proofreaders;
A UNION within a subquery:
select name, phone, hourly_rate from clowns
where hourly_rate < all
(select hourly_rate from jugglers
union
select hourly_rate from acrobats)
order by hourly_rate;
Using parenthesized query expressions to show the employees with the highest and lowest salaries:
(
select emp_no, salary, 'lowest' as type
from employee
order by salary asc
fetch first row only
)
union all
(
select emp_no, salary, 'highest' as type
from employee
order by salary desc
fetch first row only
);
6.1.10. ORDER BY
When a SELECT statement is executed, the result set is not sorted in any way. It often happens that rows appear to be sorted chronologically, simply because they are returned in the same order they were added to the table by INSERT statements. This is not something you should rely on: the order may change depending on the plan, updates to rows, etc. To specify an explicit sorting order for the result set, an ORDER BY clause is used.
SELECT ... FROM ...
...
ORDER BY <sort-specification> [, <sort-specification> ...]

<sort-specification> ::=
  <value-expression> [<ordering-specification>] [<null-ordering>]

<ordering-specification> ::= ASC | ASCENDING | DESC | DESCENDING

<null-ordering> ::= NULLS FIRST | NULLS LAST
| Argument | Description |
|---|---|
| value-expression | Any expression; an expression that is only an integer literal represents the column position |
The ORDER BY clause consists of a comma-separated list of the columns or expressions on which the result set should be sorted. A sort item can be specified in three ways: by column name, but only if the column was not aliased in the SELECT column list; by the alias given to the column with the AS keyword, which must be used if the column was aliased; or by the ordinal position of the column in the SELECT column list, which can always be used. The three forms can be mixed in the same ORDER BY clause; for instance, one column in the list can be specified by its name and another by its number.
If you sort by column position or alias, the expression corresponding to that position or alias will be copied from the SELECT list for the purpose of sorting; this also applies when the select list item is an expression.
If you use the column position to specify the sort order for a query of the SELECT * type, the server expands the asterisk to the full column list in order to determine the columns for the sort.
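To illustrate the mixing of forms, the following query (against the same system table used in the examples below) sorts by an alias and a column position in the same ORDER BY clause:

```sql
SELECT
  RDB$CHARACTER_SET_ID AS CHARSET_ID,
  RDB$COLLATION_ID AS COLL_ID,
  RDB$COLLATION_NAME AS NAME
FROM RDB$COLLATIONS
ORDER BY CHARSET_ID, 2; -- alias, then ordinal position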
Sorting Direction
The keyword ASC (short for ASCENDING) specifies a sort direction from lowest to highest; ASC is the default sort direction. The keyword DESC (short for DESCENDING) specifies a sort direction from highest to lowest.
Specifying ascending order for one column and descending order for another is allowed.
Collation Order
Using the keyword COLLATE in a <value-expression> specifies the collation order to apply for a string column if you need a collation order that is different from the normal collation for this column. The normal collation order is defined by either the default collation for the database character set, or the collation set explicitly in the column’s definition.
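A minimal sketch, assuming a hypothetical PEOPLE table whose NAME column uses a character set (such as ISO8859_1) for which the built-in PXW_SPAN (Spanish) collation is available:

```sql
-- Sort names using Spanish collation rules instead of the
-- column's normal collation.
SELECT id, name
FROM people
ORDER BY name COLLATE PXW_SPAN;
```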
NULLs Position
The keyword NULLS defines where NULL in the associated column will fall in the sort order: NULLS FIRST places the rows with the NULL column above rows ordered by that column’s value; NULLS LAST places those rows after the ordered rows. NULLS FIRST is the default.
Ordering UNION-ed Sets
Non-parenthesized query expressions contributing to a UNION cannot take an ORDER BY clause. You can order the entire output, using one ORDER BY clause at the end of the overall query, or use parenthesized query expressions, which do allow ORDER BY.

The simplest (and, in some cases, the only) method for specifying the sort order is by the ordinal column position. However, it is also valid to use the column names or aliases, from the first contributing query only. The ASC/DESC and/or NULLS directives are available for this global set. If discrete ordering within a contributing set is required, use parenthesized query expressions, derived tables, or common table expressions for those sets.
Examples of ORDER BY
Sorting the result set in ascending order by the RDB$CHARACTER_SET_ID and RDB$COLLATION_ID columns of the RDB$COLLATIONS table:
SELECT
RDB$CHARACTER_SET_ID AS CHARSET_ID,
RDB$COLLATION_ID AS COLL_ID,
RDB$COLLATION_NAME AS NAME
FROM RDB$COLLATIONS
ORDER BY RDB$CHARACTER_SET_ID, RDB$COLLATION_ID;
The same, but sorting by the column aliases:
SELECT
RDB$CHARACTER_SET_ID AS CHARSET_ID,
RDB$COLLATION_ID AS COLL_ID,
RDB$COLLATION_NAME AS NAME
FROM RDB$COLLATIONS
ORDER BY CHARSET_ID, COLL_ID;
Sorting the output data by the column position numbers:
SELECT
RDB$CHARACTER_SET_ID AS CHARSET_ID,
RDB$COLLATION_ID AS COLL_ID,
RDB$COLLATION_NAME AS NAME
FROM RDB$COLLATIONS
ORDER BY 1, 2;
Sorting a SELECT * query by position numbers (possible, but nasty and not recommended):
SELECT *
FROM RDB$COLLATIONS
ORDER BY 3, 2;
Sorting by the second column in the BOOKS table, or (if BOOKS has only one column) the FILMS.DIRECTOR column:
SELECT
BOOKS.*,
FILMS.DIRECTOR
FROM BOOKS, FILMS
ORDER BY 2;
Sorting in descending order by the values of column PROCESS_TIME, with NULLs placed at the beginning of the set:
SELECT *
FROM MSG
ORDER BY PROCESS_TIME DESC NULLS FIRST;
Sorting the set obtained by a UNION of two queries. Results are sorted in descending order for the values in the second column, with NULLs at the end of the set; and in ascending order for the values of the first column, with NULLs at the beginning.
SELECT
DOC_NUMBER, DOC_DATE
FROM PAYORDER
UNION ALL
SELECT
DOC_NUMBER, DOC_DATE
FROM BUDGORDER
ORDER BY 2 DESC NULLS LAST, 1 ASC NULLS FIRST;
6.1.11. ROWS
Retrieves a slice of rows from an ordered set
SELECT <columns> FROM ...
  [WHERE ...]
  [ORDER BY ...]
  ROWS <value-expression> [TO <value-expression>]
| Argument | Description |
|---|---|
| value-expression | Any integer expression |

ROWS is non-standard syntax.
ROWS limits the amount of rows returned by the SELECT statement to a specified number or range. The ROWS clause does the same job as the FIRST and SKIP clauses, but neither of those is SQL-compliant.

Unlike FIRST and SKIP, and OFFSET and FETCH, the ROWS and TO clauses accept any type of integer expression as their arguments, without parentheses. Of course, parentheses may still be needed for nested evaluations inside the expression, and a subquery must always be enclosed in parentheses.
Characteristics of Using ROWS m Without a TO Clause:

Calling ROWS m retrieves the first m records from the set specified.

- If m is greater than the total number of records in the intermediate data set, the entire set is returned
- If m = 0, an empty set is returned
- If m < 0, the SELECT statement call fails with an error
Characteristics of Using ROWS m With a TO Clause:

Calling ROWS m TO n retrieves the rows from the set, starting at row m and ending after row n; the range is inclusive.

- If m is greater than the total number of rows in the intermediate set and n >= m, an empty set is returned
- If m is not greater than n and n is greater than the total number of rows in the intermediate set, the result set will be limited to rows starting from m, up to the end of the set
- If m < 1 and n < 1, the SELECT statement call fails with an error
- If n = m - 1, an empty set is returned
- If n < m - 1, the SELECT statement call fails with an error
Not Possible to Use a TO Clause Without a ROWS Clause:

While ROWS is an alternative to the FIRST and SKIP syntax, there is one situation where the ROWS syntax does not provide the same behaviour: specifying SKIP n on its own returns the entire intermediate set, without the first n rows. The ROWS … TO syntax needs a little help to achieve this. With the ROWS syntax, you need a ROWS clause in association with the TO clause, deliberately making the second (n) argument greater than the size of the intermediate data set. This is achieved by creating an expression for n that uses a subquery to retrieve the count of rows in the intermediate set and adds 1 to it, or by using a literal with a sufficiently large value.
Replacing FIRST/SKIP and OFFSET/FETCH

The ROWS clause can be used instead of the SQL-standard OFFSET/FETCH or the non-standard FIRST/SKIP clauses, except for the case where only OFFSET or SKIP is used, that is, when the whole result set is returned except for the specified number of rows skipped from the beginning. To implement this behaviour using ROWS, you must specify the TO clause with a value larger than the size of the returned result set.
Mixing ROWS and FIRST/SKIP or OFFSET/FETCH

ROWS syntax cannot be mixed with FIRST/SKIP or OFFSET/FETCH in the same SELECT expression. Using the different syntaxes in different subqueries of the same statement is allowed.
ROWS Syntax in UNION Queries

When ROWS is used in a UNION query, the ROWS directive is applied to the unioned set and must be placed after the last SELECT statement.

If a need arises to limit the subsets returned by one or more SELECT statements inside the UNION, there are a couple of options:

- Use FIRST/SKIP syntax in these SELECT statements, bearing in mind that an ordering clause (ORDER BY) cannot be applied locally to the discrete queries, but only to the combined output.
- Convert the queries to derived tables with their own ROWS clauses.
- Use parenthesized query expressions with OFFSET/FETCH.
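The following sketch, reusing the music-collection tables from the UNION section above, shows both the global placement of ROWS and the derived-table option for limiting one contributing set:

```sql
-- ROWS after the last select applies to the unioned set as a whole
select id, title from cds
union all
select id, title from records
rows 10;

-- Limiting only the CDS subset, via a derived table with its own ROWS
select c.id, c.title
from (select id, title from cds rows 5) c
union all
select id, title from records;
```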
Examples of ROWS

The following examples rewrite the examples used in the section about FIRST and SKIP, earlier in this chapter.
Retrieve the first ten names from the output of a sorted query on the PEOPLE table:
SELECT id, name
FROM People
ORDER BY name ASC
ROWS 1 TO 10;
or its equivalent
SELECT id, name
FROM People
ORDER BY name ASC
ROWS 10;
Return all records from the PEOPLE table except for the first 10 names:
SELECT id, name
FROM People
ORDER BY name ASC
ROWS 11 TO (SELECT COUNT(*) FROM People);
And this query will return the last 10 records (pay attention to the parentheses):
SELECT id, name
FROM People
ORDER BY name ASC
ROWS (SELECT COUNT(*) - 9 FROM People)
TO (SELECT COUNT(*) FROM People);
This one will return rows 81-100 from the PEOPLE table:
SELECT id, name
FROM People
ORDER BY name ASC
ROWS 81 TO 100;
6.1.12. OFFSET, FETCH
Retrieves a slice of rows from an ordered set
SELECT <columns> FROM ...
  [WHERE ...]
  [ORDER BY ...]
  [OFFSET <offset-fetch-expression> { ROW | ROWS }]
  [FETCH { FIRST | NEXT } [<offset-fetch-expression>] { ROW | ROWS } ONLY]

<offset-fetch-expression> ::= <integer-literal> | <query-parameter>
| Argument | Description |
|---|---|
| integer-literal | Integer literal |
| query-parameter | Query parameter placeholder |
The OFFSET and FETCH clauses are an SQL standard-compliant equivalent for FIRST/SKIP, and an alternative for ROWS. The OFFSET clause specifies the number of rows to skip. The FETCH clause specifies the number of rows to fetch.

When <offset-fetch-expression> is left out of the FETCH clause (e.g. FETCH FIRST ROW ONLY), one row will be fetched. The choice between ROW or ROWS, or FIRST or NEXT in these clauses is purely aesthetic (e.g. making the query more readable or grammatically correct). There is no difference between OFFSET 10 ROW and OFFSET 10 ROWS, or between FETCH NEXT 10 ROWS ONLY and FETCH FIRST 10 ROWS ONLY.

As with SKIP and FIRST, the OFFSET and FETCH clauses can be applied independently, in both top-level and nested query expressions.
Examples of OFFSET and FETCH
SELECT *
FROM T1
ORDER BY COL1
OFFSET 10 ROWS
SELECT *
FROM T1
ORDER BY COL1
FETCH FIRST 10 ROWS ONLY
OFFSET and FETCH clauses in a derived table and in the outer query:

SELECT *
FROM (
SELECT *
FROM T1
ORDER BY COL1 DESC
OFFSET 1 ROW
FETCH NEXT 10 ROWS ONLY
) a
ORDER BY a.COL1
FETCH FIRST ROW ONLY
The following examples rewrite the FIRST/SKIP examples and ROWS examples earlier in this chapter.
Retrieve the first ten names from the output of a sorted query on the PEOPLE table:
SELECT id, name
FROM People
ORDER BY name ASC
FETCH NEXT 10 ROWS ONLY;
Return all records from the PEOPLE table except for the first 10 names:
SELECT id, name
FROM People
ORDER BY name ASC
OFFSET 10 ROWS;
And this query will return the last 10 records. Contrary to FIRST/SKIP and ROWS, the OFFSET and FETCH clauses do not accept expressions (including subqueries). To retrieve the last 10 rows, reverse the sort so the last 10 rows come first, fetch those, and then sort the result in the desired order.
SELECT id, name
FROM (
SELECT id, name
FROM People
ORDER BY name DESC
FETCH FIRST 10 ROWS ONLY
) a
ORDER BY name ASC;
This one will return rows 81-100 from the PEOPLE table:
SELECT id, name
FROM People
ORDER BY name ASC
OFFSET 80 ROWS
FETCH NEXT 20 ROWS ONLY;
6.1.13. FOR UPDATE [OF]
SELECT ... FROM single_table
  [WHERE ...]
  FOR UPDATE [OF <column-name-list>]
FOR UPDATE does not do what its name suggests. Its only effect currently is to disable the pre-fetch buffer. It is likely to change in a future Firebird version: the plan is to validate cursors marked with FOR UPDATE and reject positioned updates and deletes on cursors that lack it.
The OF sub-clause does not do anything at all, and is only provided for syntax compatibility with other database systems.
6.1.14. WITH LOCK
Applies limited pessimistic locking
SELECT ... FROM single_table
  [WHERE ...]
  [FOR UPDATE [OF <column_list>]]
  WITH LOCK [SKIP LOCKED]
WITH LOCK provides a limited explicit pessimistic locking capability for cautious use in conditions where the affected row set is:

- small (ideally a singleton), and
- precisely controlled by the application code.
This is for experts only!
The need for a pessimistic lock in Firebird is very rare indeed and should be well understood before use of this extension is considered. It is essential to understand the effects of transaction isolation and other transaction attributes before attempting to implement explicit locking in your application.
If the WITH LOCK clause succeeds, it will secure a lock on the selected rows and prevent any other transaction from obtaining write access to any of those rows, or their dependants, until your transaction ends.

WITH LOCK can only be used with a top-level, single-table SELECT statement.
It is not available:

- in a subquery specification
- for joined sets
- with the DISTINCT operator, a GROUP BY clause or any other aggregating operation
- with a view
- with the output of a selectable stored procedure
- with an external table
- with a UNION query
As the engine considers, in turn, each record falling under an explicit lock statement, it returns either the most recently committed record version, regardless of the database state when the statement was submitted, or an exception.
When the optional SKIP LOCKED clause is specified, records locked by a different transaction are skipped. If a statement has both SKIP LOCKED and OFFSET/SKIP/ROWS subclauses, locked rows may be skipped before the OFFSET/SKIP/ROWS subclause can account for them, thus skipping more rows than specified in OFFSET/SKIP/ROWS.
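A minimal job-queue sketch; the QUEUE table and its columns are hypothetical:

```sql
-- Each worker transaction locks a set of unprocessed rows;
-- rows already locked by other worker transactions are
-- silently skipped instead of causing a conflict or a wait.
select id, task
from queue
where processed is false
for update with lock skip locked;
```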
Wait behaviour and conflict reporting depend on the transaction parameters specified in the TPB block:
| TPB mode | Behaviour |
|---|---|
| isc_tpb_consistency | Explicit locks are overridden by implicit or explicit table-level locks and are ignored. |
| isc_tpb_concurrency + isc_tpb_nowait | If a record is modified by any transaction that was committed since the transaction attempting to get explicit lock started, or an active transaction has performed a modification of this record, an update conflict exception is raised immediately. |
| isc_tpb_concurrency + isc_tpb_wait | If the record is modified by any transaction that has committed since the transaction attempting to get explicit lock started, an update conflict exception is raised immediately. If an active transaction is holding ownership on this record (via explicit locking or by a normal optimistic write-lock), the transaction attempting the explicit lock waits for the outcome of the blocking transaction and, when it finishes, attempts to get the lock on the record again. This means that, if the blocking transaction committed a modified version of this record, an update conflict exception will be raised. |
| isc_tpb_read_committed + isc_tpb_nowait | If there is an active transaction holding ownership on this record (via explicit locking or normal update), an update conflict exception is raised immediately. |
| isc_tpb_read_committed + isc_tpb_wait | If there is an active transaction holding ownership on this record (via explicit locking or by a normal optimistic write-lock), the transaction attempting the explicit lock waits for the outcome of the blocking transaction and, when it finishes, attempts to get the lock on the record again. Update conflict exceptions can never be raised by an explicit lock statement in this TPB mode. |
Usage with a FOR UPDATE Clause

If the FOR UPDATE sub-clause precedes the WITH LOCK sub-clause, buffered fetches are suppressed. Thus, the lock will be applied to each row, one by one, at the moment it is fetched. It then becomes possible that a lock which appeared to succeed when requested will nevertheless fail subsequently, when an attempt is made to fetch a row that has become locked by another transaction in the meantime. This can be avoided by also using SKIP LOCKED.
As an alternative, it may be possible in your access components to set the size of the fetch buffer to 1. This would enable you to process the currently-locked row before the next is fetched and locked, or to handle errors without rolling back your transaction.
How the engine deals with WITH LOCK

When an UPDATE statement tries to access a record that is locked by another transaction, it either raises an update conflict exception or waits for the locking transaction to finish, depending on TPB mode. Engine behaviour here is the same as if this record had already been modified by the locking transaction. No special error codes are returned from conflicts involving pessimistic locks.

The engine guarantees that all records returned by an explicit lock statement are locked and do meet the search conditions specified in the WHERE clause, as long as the search conditions do not depend on any other tables, via joins, subqueries, etc. It also guarantees that rows not meeting the search conditions will not be locked by the statement. It cannot guarantee that there are no rows which, though meeting the search conditions, are not locked. This situation can arise if other, parallel transactions commit their changes during the course of the locking statement’s execution.
The engine locks rows at fetch time. This has important consequences if you lock several rows at once. Many access methods for Firebird databases default to fetching output in packets of a few hundred rows (“buffered fetches”). Most data access components cannot bring you the rows contained in the last-fetched packet, when an error occurred.
Caveats using WITH LOCK

- Rolling back of an implicit or explicit savepoint releases record locks that were taken under that savepoint, but it doesn’t notify waiting transactions. Applications should not depend on this behaviour as it may be changed in the future.
- While explicit locks can be used to prevent and/or handle unusual update conflict errors, the volume of deadlock errors will grow unless you design your locking strategy carefully and control it rigorously.
- Most applications do not need explicit locks at all. The main purposes of explicit locks are:
  - to prevent expensive handling of update conflict errors in heavily loaded applications, and
  - to maintain integrity of objects mapped to a relational database in a clustered environment.

  If your use of explicit locking doesn’t fall in one of these two categories, then it’s probably the wrong way to do the task in Firebird.
- Explicit locking is an advanced feature; do not misuse it! While solutions for these kinds of problems may be important for websites handling thousands of concurrent writers, or for ERP/CRM systems operating in large corporations, most application programs do not need to work in such conditions.
6.1.15. OPTIMIZE FOR
SELECT ...
  [WITH LOCK [SKIP LOCKED]]
  OPTIMIZE FOR {FIRST | ALL} ROWS
The OPTIMIZE FOR clause can only occur on a top-level SELECT. This feature allows the optimizer to consider another (hopefully better) plan if only a subset of rows is fetched initially by the user application (with the remaining rows being fetched on demand), thus improving the response time. It can also be specified at the session level using the SET OPTIMIZE management statement.
The default behaviour can be specified globally using the OptimizeForFirstRows setting in firebird.conf or databases.conf.
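As a sketch (the table and column names are illustrative), the clause and the corresponding session-level management statement look like this:

```sql
-- Per-statement: favour a plan with a fast first-row response,
-- useful when the application fetches the result incrementally
select id, name
from big_table
where kind = 3
optimize for first rows;

-- Per-session: make FIRST ROWS the default for subsequent statements
set optimize for first rows;
```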
6.1.16. INTO
Passes SELECT output into variables

PSQL

In PSQL, the INTO clause is placed at the end of the SELECT statement.
SELECT [...] <column-list>
FROM ...
[...]
[INTO <variable-list>]

<variable-list> ::= <variable> [, <variable> ...]

<variable> ::= varname | ':' varname
The colon (‘:’) prefix for local variable names in PSQL is optional in the INTO clause.

In PSQL code (triggers, stored procedures and executable blocks), the results of a SELECT statement can be loaded row-by-row into local variables. It is often the only way to do anything with the returned values at all, unless an explicit or implicit cursor name is specified. The number, order and types of the variables must match the columns in the output row.

A “plain” SELECT statement can only be used in PSQL if it returns at most one row, i.e. if it is a singleton select. For multi-row selects, PSQL provides the FOR SELECT loop construct, discussed later in the PSQL chapter. PSQL also supports the DECLARE CURSOR statement, which binds a named cursor to a SELECT statement. The cursor can then be used to walk the result set.
Examples
- Selecting aggregated values and passing them into previously declared variables min_amt, avg_amt and max_amt:

      select min(amount), avg(cast(amount as float)), max(amount)
        from orders
        where artno = 372218
        into min_amt, avg_amt, max_amt;

  The CAST serves to make the average a floating point number; otherwise, since amount is presumably an integer field, SQL rules would truncate it to the nearest lower integer.

- A PSQL trigger that retrieves two values as a BLOB field (using the LIST() function) and assigns it INTO a third field:

      select list(name, ', ')
        from persons p
        where p.id in (new.father, new.mother)
        into new.parentnames;
6.1.17. Common Table Expressions (“WITH … AS … SELECT”)
<query-expression> ::=
  [<with-clause>] <query-expression-body>
    [<order-by-clause>]
    [{ <rows-clause> | [<result-offset-clause>] [<fetch-first-clause>] }]

<with-clause> ::= WITH [RECURSIVE] <with-list-element> [, <with-list-element> ...]

<with-list-element> ::= query-name [(<column-name-list>)] AS (<query-expression>)

<column-name-list> ::= column-name [, column-name ...]
Argument | Description |
---|---|
query-name |
Alias for a table expression |
column-name |
Alias for a column in a table expression |
A common table expression or CTE can be described as a virtual table or view, defined in a preamble to a main query, and going out of scope after the main query’s execution. The main query can reference any CTEs defined in the preamble as if they were regular tables or views. CTEs can be recursive, i.e. self-referencing, but they cannot be nested.
CTE Notes
-
A CTE definition can contain any legal query-expression, as long as it doesn’t have a “WITH …” preamble of its own (no nesting).
-
CTEs defined for the same main query can reference each other, but care should be taken to avoid loops.
-
CTEs can be referenced from anywhere in the main query.
-
Each CTE can be referenced multiple times in the main query, using different aliases if necessary.
-
When enclosed in parentheses, CTE constructs can be used as subqueries in SELECT statements, but also in UPDATEs, MERGEs etc.
-
In PSQL, CTEs are also supported in FOR loop headers:

for
  with my_rivers as (select * from rivers where owner = 'me')
  select name, length from my_rivers
  into :rname, :rlen
do
begin
  ..
end
Example
with dept_year_budget as (
select fiscal_year,
dept_no,
sum(projected_budget) as budget
from proj_dept_budget
group by fiscal_year, dept_no
)
select d.dept_no,
d.department,
dyb_2008.budget as budget_08,
dyb_2009.budget as budget_09
from department d
left join dept_year_budget dyb_2008
on d.dept_no = dyb_2008.dept_no
and dyb_2008.fiscal_year = 2008
left join dept_year_budget dyb_2009
on d.dept_no = dyb_2009.dept_no
and dyb_2009.fiscal_year = 2009
where exists (
select * from proj_dept_budget b
where d.dept_no = b.dept_no
);
Recursive CTEs
A recursive (self-referencing) CTE is a UNION which must have at least one non-recursive member, called the anchor. The non-recursive member(s) must be placed before the recursive member(s). Recursive members are linked to each other and to their non-recursive neighbour by UNION ALL operators. The unions between non-recursive members may be of any type.
Recursive CTEs require the RECURSIVE keyword to be present right after WITH. Each recursive union member may reference itself only once, and it must do so in a FROM clause.
A great benefit of recursive CTEs is that they use far less memory and CPU cycles than an equivalent recursive stored procedure.
Execution Pattern
The execution pattern of a recursive CTE is as follows:
-
The engine begins execution from a non-recursive member.
-
For each row evaluated, it starts executing each recursive member one by one, using the current values from the outer row as parameters.
-
If the currently executing instance of a recursive member produces no rows, execution loops back one level and gets the next row from the outer result set.
Example of recursive CTEs
WITH RECURSIVE DEPT_YEAR_BUDGET AS (
SELECT
FISCAL_YEAR,
DEPT_NO,
SUM(PROJECTED_BUDGET) BUDGET
FROM PROJ_DEPT_BUDGET
GROUP BY FISCAL_YEAR, DEPT_NO
),
DEPT_TREE AS (
SELECT
DEPT_NO,
HEAD_DEPT,
DEPARTMENT,
CAST('' AS VARCHAR(255)) AS INDENT
FROM DEPARTMENT
WHERE HEAD_DEPT IS NULL
UNION ALL
SELECT
D.DEPT_NO,
D.HEAD_DEPT,
D.DEPARTMENT,
H.INDENT || ' '
FROM DEPARTMENT D
JOIN DEPT_TREE H ON H.HEAD_DEPT = D.DEPT_NO
)
SELECT
D.DEPT_NO,
D.INDENT || D.DEPARTMENT DEPARTMENT,
DYB_2008.BUDGET AS BUDGET_08,
DYB_2009.BUDGET AS BUDGET_09
FROM DEPT_TREE D
LEFT JOIN DEPT_YEAR_BUDGET DYB_2008 ON
(D.DEPT_NO = DYB_2008.DEPT_NO) AND
(DYB_2008.FISCAL_YEAR = 2008)
LEFT JOIN DEPT_YEAR_BUDGET DYB_2009 ON
(D.DEPT_NO = DYB_2009.DEPT_NO) AND
(DYB_2009.FISCAL_YEAR = 2009);
The next example returns the pedigree of a horse. The main difference is that recursion occurs simultaneously in two branches of the pedigree.
WITH RECURSIVE PEDIGREE (
CODE_HORSE,
CODE_FATHER,
CODE_MOTHER,
NAME,
MARK,
DEPTH)
AS (SELECT
HORSE.CODE_HORSE,
HORSE.CODE_FATHER,
HORSE.CODE_MOTHER,
HORSE.NAME,
CAST('' AS VARCHAR(80)),
0
FROM
HORSE
WHERE
HORSE.CODE_HORSE = :CODE_HORSE
UNION ALL
SELECT
HORSE.CODE_HORSE,
HORSE.CODE_FATHER,
HORSE.CODE_MOTHER,
HORSE.NAME,
'F' || PEDIGREE.MARK,
PEDIGREE.DEPTH + 1
FROM
HORSE
JOIN PEDIGREE
ON HORSE.CODE_HORSE = PEDIGREE.CODE_FATHER
WHERE
PEDIGREE.DEPTH < :MAX_DEPTH
UNION ALL
SELECT
HORSE.CODE_HORSE,
HORSE.CODE_FATHER,
HORSE.CODE_MOTHER,
HORSE.NAME,
'M' || PEDIGREE.MARK,
PEDIGREE.DEPTH + 1
FROM
HORSE
JOIN PEDIGREE
ON HORSE.CODE_HORSE = PEDIGREE.CODE_MOTHER
WHERE
PEDIGREE.DEPTH < :MAX_DEPTH
)
SELECT
CODE_HORSE,
NAME,
MARK,
DEPTH
FROM
PEDIGREE
-
Aggregates (DISTINCT, GROUP BY, HAVING) and aggregate functions (SUM, COUNT, MAX etc.) are not allowed in recursive union members.
-
A recursive reference cannot participate in an outer join.
-
The maximum recursion depth is 1024.
6.1.18. Full SELECT Syntax
The previous sections used incomplete or simplified fragments of the SELECT syntax. Following is the full syntax.
Where possible, the syntax below uses syntax names from the SQL standard, which do not necessarily match the syntax names in the Firebird source. In some cases, syntax productions have been collapsed, because the productions in the SQL standard are verbose as they are also used to add additional rules or definitions to a syntax element.
Although this is intended as the full syntax, some productions are not shown (e.g. <value-expression>) and assumed to be clear for the reader, and in some cases we take shortcuts like using query-name or column-alias for identifiers in a syntax production.
If you come across situations where these shortcuts do result in lack of clarity or other issues, let us know on https://github.com/FirebirdSQL/firebird-documentation or on firebird-devel.
The syntax below does not include the PSQL SELECT … INTO syntax, which is essentially <cursor-specification> INTO <variable-list>.
<cursor-specification> ::= <query-expression> [<updatability-clause>] [<lock-clause>]

<query-expression> ::=
  [<with-clause>] <query-expression-body>
    [<order-by-clause>]
    [{ <rows-clause> | [<result-offset-clause>] [<fetch-first-clause>] }]

<with-clause> ::= WITH [RECURSIVE] <with-list-element> [, <with-list-element> ...]

<with-list-element> ::= query-name [(<column-name-list>)] AS (<query-expression>)

<column-name-list> ::= column-name [, column-name ...]

<query-expression-body> ::=
    <query-term>
  | <query-expression-body> UNION [{ DISTINCT | ALL }] <query-term>

<query-term> ::= <query-primary>

<query-primary> ::=
    <query-specification>
  | (<query-expression-body>
       [<order-by-clause>]
       [<result-offset-clause>] [<fetch-first-clause>])

<query-specification> ::=
  SELECT <limit-clause> [{ ALL | DISTINCT }] <select-list>
    FROM <table-reference> [, <table-reference> ...]
    [WHERE <search-condition>]
    [GROUP BY <value-expression> [, <value-expression> ...]]
    [HAVING <search-condition>]
    [WINDOW <window-definition> [, <window-definition> ...]]
    [PLAN <plan-expression>]

<limit-clause> ::= [FIRST <limit-expression>] [SKIP <limit-expression>]

<limit-expression> ::=
    <integer-literal>
  | <query-parameter>
  | (<value-expression>)

<select-list> ::= * | <select-sublist> [, <select-sublist> ...]

<select-sublist> ::=
    table-alias.*
  | <value-expression> [[AS] column-alias]

<table-reference> ::= <table-primary> | <joined-table>

<table-primary> ::=
    <table-or-query-name> [[AS] correlation-name]
  | [LATERAL] <derived-table> [<correlation-or-recognition>]
  | <parenthesized-joined-table>

<table-or-query-name> ::=
    table-name
  | query-name
  | [package-name.]procedure-name [(<procedure-args>)]

<procedure-args> ::= <value-expression> [, <value-expression> ...]

<correlation-or-recognition> ::= [AS] correlation-name [(<column-name-list>)]

<derived-table> ::= (<query-expression>)

<parenthesized-joined-table> ::=
    (<parenthesized-joined-table>)
  | (<joined-table>)

<joined-table> ::= <cross-join> | <natural-join> | <qualified-join>

<cross-join> ::= <table-reference> CROSS JOIN <table-primary>

<natural-join> ::= <table-reference> NATURAL [<join-type>] JOIN <table-primary>

<join-type> ::= INNER | { LEFT | RIGHT | FULL } [OUTER]

<qualified-join> ::=
  <table-reference> [<join-type>] JOIN <table-primary>
  { ON <search-condition> | USING (<column-name-list>) }

<window-definition> ::= new-window-name AS (<window-specification-details>)

<window-specification-details> ::=
  [existing-window-name]
  [<window-partition-clause>]
  [<order-by-clause>]
  [<window-frame-clause>]

<window-partition-clause> ::= PARTITION BY <value-expression> [, <value-expression> ...]

<order-by-clause> ::= ORDER BY <sort-specification> [, <sort-specification> ...]

<sort-specification> ::= <value-expression> [<ordering-specification>] [<null-ordering>]

<ordering-specification> ::= ASC | ASCENDING | DESC | DESCENDING

<null-ordering> ::= NULLS FIRST | NULLS LAST

<window-frame-clause> ::= { RANGE | ROWS } <window-frame-extent>

<window-frame-extent> ::= <window-frame-start> | <window-frame-between>

<window-frame-start> ::= UNBOUNDED PRECEDING | <value-expression> PRECEDING | CURRENT ROW

<window-frame-between> ::=
  BETWEEN { UNBOUNDED PRECEDING | <value-expression> PRECEDING
            | CURRENT ROW | <value-expression> FOLLOWING }
  AND { <value-expression> PRECEDING | CURRENT ROW
        | <value-expression> FOLLOWING | UNBOUNDED FOLLOWING }

<rows-clause> ::= ROWS <value-expression> [TO <value-expression>]

<result-offset-clause> ::= OFFSET <offset-fetch-expression> { ROW | ROWS }

<offset-fetch-expression> ::= <integer-literal> | <query-parameter>

<fetch-first-clause> ::=
  FETCH { FIRST | NEXT } [<offset-fetch-expression>] { ROW | ROWS } ONLY

<updatability-clause> ::= FOR UPDATE [OF <column-name-list>]

<lock-clause> ::= WITH LOCK [SKIP LOCKED]
6.2. INSERT
Inserts rows of data into a table or updatable view
INSERT INTO target
  { DEFAULT VALUES | [(<column_list>)] [<override_opt>] <value-source> }
  [RETURNING <returning_list> [INTO <variables>]]

<column_list> ::= col_name [, col_name ...]

<override_opt> ::= OVERRIDING { USER | SYSTEM } VALUE

<value-source> ::= VALUES (<value-list>) | <query-expression>

<value-list> ::= <ins-value> [, <ins-value> ...]

<ins-value> ::= <value-expression> | DEFAULT

<returning_list> ::= * | <output_column> [, <output_column> ...]

<output_column> ::=
    target.*
  | <return_expression> [COLLATE collation] [[AS] alias]

<return_expression> ::= <value-expression> | [target.]col_name

<value-expression> ::=
    <literal>
  | <context-variable>
  | any other expression returning a single value of a Firebird data type or NULL

<variables> ::= [:]varname [, [:]varname ...]
Argument | Description |
---|---|
target |
The name of the table or view to which a new row, or batch of rows, should be added |
col_name |
Name of a table or view column |
value-expression |
An expression whose value is used for inserting into the table or for returning |
return_expression |
The expression to be returned in the RETURNING clause |
literal |
A literal |
context-variable |
Context variable |
varname |
Name of a PSQL local variable |
The INSERT statement is used to add rows to a table or to one or more tables underlying a view:
-
If the column values are supplied in a VALUES clause, exactly one row is inserted
-
The values may be provided instead by a SELECT expression, in which case zero to many rows may be inserted
-
With the DEFAULT VALUES clause, no values are provided at all and exactly one row is inserted.

-
Columns returned to the NEW.column_name context variables in DML triggers should not have a colon (“:”) prefixed to their names
-
Columns may not appear more than once in the column list.
BEFORE INSERT Triggers
Regardless of the method used for inserting rows, be mindful of any columns in the target table or view that are populated by BEFORE INSERT triggers.
6.2.1. INSERT … VALUES
The VALUES list must provide a value for every column in the column list, in the same order and of the correct type. The column list need not specify every column in the target but, if the column list is absent, the engine requires a value for every column in the table or view (computed columns excluded).
The expression DEFAULT allows a column to be specified in the column list, but instructs Firebird to use the default value (either NULL or the value specified in the DEFAULT clause of the column definition). For identity columns, specifying DEFAULT will generate the identity value. It is possible to include calculated columns in the column list and specify DEFAULT as the column value.
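For instance, both an identity column and a computed column can be listed with DEFAULT (the orders2 table, with an identity id and a computed total, is hypothetical):

```sql
-- id is an identity column, total is a COMPUTED BY column;
-- DEFAULT lets the engine supply both values
INSERT INTO orders2 (id, customer, total)
VALUES (DEFAULT, 'Smith', DEFAULT);
```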
Introducer syntax provides a way to identify the character set of a value that is a string constant (literal). Introducer syntax works only with literal strings: it cannot be applied to string variables, parameters, column references or values that are expressions.
INSERT INTO cars (make, model, year)
VALUES ('Ford', 'T', 1908);
INSERT INTO cars
VALUES ('Ford', 'T', 1908, 'USA', 850);
-- notice the '_' prefix (introducer syntax)
INSERT INTO People
VALUES (_ISO8859_1 'Hans-Jörg Schäfer');
6.2.2. INSERT … SELECT
For this method of inserting, the output columns of the SELECT statement (or <query-expression>) must provide a value for every target column in the column list, in the same order and of the correct type. Literal values, context variables or expressions of compatible type can be substituted for any column in the source row. In this case, a source column list and a corresponding VALUES list are required.
If the column list is absent — as it is when SELECT * is used for the source expression — the column_list must contain the names of every column in the target table or view (computed columns excluded).
INSERT INTO cars (make, model, year)
SELECT make, model, year
FROM new_cars;
INSERT INTO cars
SELECT * FROM new_cars;
INSERT INTO Members (number, name)
SELECT number, name FROM NewMembers
WHERE Accepted = 1
UNION ALL
SELECT number, name FROM SuspendedMembers
WHERE Vindicated = 1
INSERT INTO numbers(num)
WITH RECURSIVE r(n) as (
SELECT 1 FROM rdb$database
UNION ALL
SELECT n+1 FROM r WHERE n < 100
)
SELECT n FROM r
Of course, the column names in the source table need not be the same as those in the target table.
Any type of SELECT statement is permitted, as long as its output columns exactly match the insert columns in number and order. Types need not be the same, but they must be assignment-compatible.
Since Firebird 5.0, an INSERT … SELECT with a RETURNING clause produces zero or more rows, and the statement is described as type isc_info_sql_stmt_select. In other words, an INSERT … SELECT … RETURNING will no longer produce a “multiple rows in singleton select” error when the select produces multiple rows.
For the time being, an INSERT … VALUES (…) or INSERT … DEFAULT VALUES with a RETURNING clause is still described as isc_info_sql_stmt_exec_procedure. This behaviour may change in a future Firebird version.
6.2.3. INSERT … DEFAULT VALUES
The DEFAULT VALUES clause allows insertion of a record without providing any values at all, either directly or from a SELECT statement. This is only possible if every NOT NULL or CHECKed column in the table either has a valid default declared or gets such a value from a BEFORE INSERT trigger. Furthermore, triggers providing required field values must not depend on the presence of input values.
Specifying DEFAULT VALUES is equivalent to specifying a values list with the expression DEFAULT for all columns.
INSERT INTO journal
DEFAULT VALUES
RETURNING entry_id;
6.2.4. OVERRIDING
The OVERRIDING clause controls the behaviour of an identity column for this statement only.

OVERRIDING SYSTEM VALUE
-
The user-provided value for the identity column is used, and no value is generated using the identity. In other words, for this insert, the identity will behave as if it is GENERATED BY DEFAULT. This option can only be specified for tables with a GENERATED ALWAYS identity column.
This can be useful when merging or importing data from another source. After such an insert, it may be necessary to change the next value of the identity sequence using ALTER TABLE to prevent subsequent inserts from generating colliding identity values.

OVERRIDING USER VALUE
-
The user-provided value for the identity column is ignored, and the column value is generated using the identity. In other words, for this insert, the identity will behave as if it is GENERATED ALWAYS, while allowing the identity column in the column-list. This option can be specified for both types of identity columns.
It is usually simpler to leave out the identity column to achieve the same effect.
Examples of OVERRIDING
-- for ALWAYS
-- value 11 is used anyway
insert into objects_always (id, name)
OVERRIDING SYSTEM VALUE values (11, 'Laptop');
-- for both ALWAYS and BY DEFAULT
-- value 12 is not used
insert into objects_default (id, name)
OVERRIDING USER VALUE values (12, 'Laptop');
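After importing rows with OVERRIDING SYSTEM VALUE, the identity's next value can be moved past the imported keys with ALTER TABLE, as noted above. A sketch reusing the objects_always table from the example (the restart value 1000 is illustrative):

```sql
-- restart the identity sequence above the highest imported id
ALTER TABLE objects_always
  ALTER COLUMN id RESTART WITH 1000;
```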
6.2.5. The RETURNING Clause
An INSERT statement may optionally include a RETURNING clause to return values from the inserted rows. The clause, if present, need not contain all columns referenced in the insert statement and may also contain other columns or expressions. The returned values reflect any changes that may have been made in BEFORE INSERT triggers.
The user executing the statement needs to have SELECT privileges on the columns specified in the RETURNING clause.
The syntax of the returning_list is similar to the column list of a SELECT clause. It is possible to reference all columns using * or table_name.*.
The optional INTO sub-clause is only valid in PSQL.
Caveats for updatable views
The values reported by |
INSERT INTO Scholars (firstname, lastname, address,
phone, email)
VALUES ('Henry', 'Higgins', '27A Wimpole Street',
'3231212', NULL)
RETURNING lastname, fullname, id;
INSERT INTO Scholars (firstname, lastname, address,
phone, email)
VALUES (
'Henry', 'Higgins', '27A Wimpole Street',
'3231212', NULL)
RETURNING *;
INSERT INTO Dumbbells (firstname, lastname, iq)
SELECT fname, lname, iq
FROM Friends
ORDER BY iq ROWS 1
RETURNING id, firstname, iq
INTO :id, :fname, :iq;
-
In DSQL, an INSERT … VALUES (…) RETURNING always returns exactly one row. This behaviour may change in a future Firebird version.
-
In DSQL, an INSERT … DEFAULT VALUES RETURNING always returns exactly one row.
-
In DSQL, an INSERT … SELECT … RETURNING returns zero or more rows.
-
In PSQL, if multiple rows are returned, the statement fails with a “multiple rows in singleton select” error. This behaviour may change in a future Firebird version.
-
In PSQL, if no row was inserted, nothing is returned, and the target variables keep their existing values.
6.2.6. Inserting into BLOB columns
Inserting into BLOB columns is only possible under the following circumstances:
-
The client application has made special provisions for such inserts, using the Firebird API. In this case, the modus operandi is application-specific and outside the scope of this manual.
-
The value inserted is a string literal of no more than 65,533 bytes (64KB - 3).
A limit, in characters, is calculated at run-time for strings that are in multibyte character sets, to avoid overrunning the bytes limit. For example, for a UTF8 string (max. 4 bytes/character), the run-time limit is likely to be about floor(65533/4) = 16383 characters.
-
You are using the “INSERT … SELECT” form and one or more columns in the result set are BLOBs.
6.3. UPDATE
Updates existing rows in tables and updatable views
UPDATE target [[AS] alias]
  SET col_name = <upd_value> [, col_name = <upd_value> ...]
  [WHERE { <search-conditions> | CURRENT OF cursorname }]
  [PLAN <plan_items>]
  [ORDER BY <sort_items>]
  [ROWS m [TO n]]
  [SKIP LOCKED]
  [RETURNING <returning_list> [INTO <variables>]]

<upd_value> ::= <value-expression> | DEFAULT

<returning_list> ::= * | <output_column> [, <output_column> ...]

<output_column> ::=
    target.* | NEW.* | OLD.*
  | <return_expression> [COLLATE collation] [[AS] alias]

<return_expression> ::=
    <value-expression>
  | [target.]col_name
  | NEW.col_name
  | OLD.col_name

<value-expression> ::=
    <literal>
  | <context-variable>
  | any other expression returning a single value of a Firebird data type or NULL

<variables> ::= [:]varname [, [:]varname ...]
Argument | Description |
---|---|
target |
The name of the table or view where the records are updated |
alias |
Alias for the table or view |
col_name |
Name or alias of a column in the table or view |
value-expression |
Expression for the new value for a column that is to be updated in the table or view by the statement, or a value to be returned |
search-conditions |
A search condition limiting the set of the rows to be updated |
cursorname |
The name of the cursor through which the row(s) to be updated are positioned |
plan_items |
Clauses in the query plan |
sort_items |
Columns listed in an ORDER BY clause |
m, n |
Integer expressions for limiting the number of rows to be updated |
return_expression |
A value to be returned in the RETURNING clause |
literal |
A literal |
context-variable |
Context variable |
varname |
Name of a PSQL local variable |
The UPDATE statement changes values in a table or in one or more of the tables that underlie a view. The columns affected are specified in the SET clause. The rows affected may be limited by the WHERE and ROWS clauses. If neither WHERE nor ROWS is present, all records in the table will be updated.
6.3.1. Using an alias
If you assign an alias to a table or a view, the alias must be used when specifying columns and also in any column references included in other clauses.
Example
Correct usage:
update Fruit set soort = 'pisang' where ...
update Fruit set Fruit.soort = 'pisang' where ...
update Fruit F set soort = 'pisang' where ...
update Fruit F set F.soort = 'pisang' where ...
Not possible:
update Fruit F set Fruit.soort = 'pisang' where ...
6.3.2. The SET Clause
In the SET clause, the assignment expressions, containing the columns with the values to be set, are separated by commas. In an assignment expression, column names are on the left and the values or expressions to assign are on the right. A column may be assigned only once in the SET clause.
A column name can be used in expressions on the right. The old value of the column will always be used in these right-side values, even if the column was already assigned a new value earlier in the SET clause.
Using the expression DEFAULT will set the column to its default value (either NULL or the value specified in the DEFAULT clause of the column definition). For an identity column, specifying DEFAULT will generate a new identity value. It is possible to “update” calculated columns in the SET clause if and only if the assigned value is DEFAULT. It is not possible to assign DEFAULT as a parameter value.
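A short sketch of assigning DEFAULT in SET (the products table, with a declared default on price, is hypothetical):

```sql
-- reset price to its declared default value
UPDATE products
SET price = DEFAULT
WHERE discontinued = TRUE;
```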
Data in the TSET table:
A B
---
1 0
2 0
The statement:
UPDATE tset SET a = 5, b = a;
will change the values to:
A B
---
5 1
5 2
Notice that the old values (1 and 2) are used to update the b column even after the column was assigned a new value (5).
6.3.3. The WHERE Clause
The WHERE clause sets the conditions that limit the set of records for a searched update.
In PSQL, if a named cursor is being used for updating a set, using the WHERE CURRENT OF clause, the action is limited to the row where the cursor is currently positioned. This is a positioned update.
To be able to use the WHERE CURRENT OF clause in DSQL, the cursor name needs to be set on the statement handle before executing the statement.
UPDATE People
SET firstname = 'Boris'
WHERE lastname = 'Johnson';
UPDATE employee e
SET salary = salary * 1.05
WHERE EXISTS(
SELECT *
FROM employee_project ep
WHERE e.emp_no = ep.emp_no);
UPDATE addresses
SET city = 'Saint Petersburg', citycode = 'PET'
WHERE city = 'Leningrad'
UPDATE employees
SET salary = 2.5 * salary
WHERE title = 'CEO'
For string literals with which the parser needs help to interpret the character set of the data, the introducer syntax may be used. The string literal is preceded by the character set name, prefixed with an underscore character:
-- notice the '_' prefix
UPDATE People
SET name = _ISO8859_1 'Hans-Jörg Schäfer'
WHERE id = 53662;
6.3.4. The ORDER BY and ROWS Clauses
The ORDER BY and ROWS clauses only make real sense when used together; however, they may also be used separately.
If ROWS has one argument, m, the rows to be updated will be limited to the first m rows.
-
If m > the number of rows being processed, the entire set of rows is updated
-
If m = 0, no rows are updated
-
If m < 0, an error occurs and the update fails
If two arguments are used, m and n, ROWS limits the rows being updated to rows from m to n inclusively. Both arguments are integers and start from 1.
-
If m > the number of rows being processed, no rows are updated
-
If n > the number of rows, rows from m to the end of the set are updated
-
If m < 1 or n < 1, an error occurs and the update fails
-
If n = m - 1, no rows are updated
-
If n < m - 1, an error occurs and the update fails
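For example, to update only the second “page” of ten rows in salary order (an illustrative sketch based on the example table below):

```sql
-- rows 11 through 20, inclusive, in ascending salary order
UPDATE employees
SET salary = salary + 50
ORDER BY salary ASC
ROWS 11 TO 20;
```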
UPDATE employees
SET salary = salary + 50
ORDER BY salary ASC
ROWS 20;
6.3.5. The SKIP LOCKED Clause
When the SKIP LOCKED clause is specified, records locked by a different transaction are skipped by the statement and are not updated.
When a ROWS clause is specified, the “skip locked” check is performed after skipping the requested number of rows, and before counting the number of rows to update.
6.3.6. The RETURNING Clause
An UPDATE statement may include RETURNING to return some values from the updated rows. RETURNING may include data from any column of the row, not only the columns that are updated by the statement. It can include literals or expressions not associated with columns, if there is a need for that.
The user executing the statement needs to have SELECT privileges on the columns specified in the RETURNING clause.
When the RETURNING set contains data from the current row, the returned values report changes made in the BEFORE UPDATE triggers, but not those made in AFTER UPDATE triggers.
The context variables OLD.fieldname and NEW.fieldname can be used as column names. If OLD. or NEW. is not specified, or if the table name (target) is specified instead, the column values returned are the NEW. ones.
The syntax of the returning_list is similar to the column list of a SELECT clause. It is possible to reference all columns using *, or table_name.*, NEW.* and/or OLD.*.
In DSQL, a positioned update statement (WHERE CURRENT OF …) with RETURNING always returns a single row, while a normal update statement can return zero or more rows. The update is executed to completion before rows are returned.
In PSQL, attempts to execute an UPDATE … RETURNING that affects multiple rows will result in the error “multiple rows in singleton select”. This behaviour may change in a future Firebird version.
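For example, OLD. and NEW. make it possible to see a value before and after the change in a single statement (the table and column names are illustrative):

```sql
-- returns the salary before and after the raise
UPDATE employees
SET salary = salary * 1.1
WHERE emp_no = 5
RETURNING OLD.salary AS old_salary, NEW.salary AS new_salary;
```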
6.3.7. Updating BLOB columns
Updating a BLOB column always replaces the entire contents. Even the BLOB ID, the “handle” that is stored directly in the column, is changed. BLOBs can be updated if:
-
The client application has made special provisions for this operation, using the Firebird API. In this case, the modus operandi is application-specific and outside the scope of this manual.
-
The new value is a string literal of no more than 65,533 bytes (64KB - 3).
A limit, in characters, is calculated at run-time for strings that are in multi-byte character sets, to avoid overrunning the bytes limit. For example, for a UTF8 string (max. 4 bytes/character), the run-time limit is likely to be about floor(65533/4) = 16383 characters.
-
The source is itself a BLOB column or, more generally, an expression that returns a BLOB.
-
You use the INSERT CURSOR statement (ESQL only).
6.4. UPDATE OR INSERT
Updates existing rows in a table or updatable view or, if they do not exist, inserts them
UPDATE OR INSERT INTO target [(<column_list>)] [<override_opt>]
  VALUES (<value_list>)
  [MATCHING (<column_list>)]
  [ORDER BY <ordering-list>]
  [ROWS <m> [TO <n>]]
  [RETURNING <returning_list> [INTO <variables>]]

<column_list> ::= col_name [, col_name ...]

<override_opt> ::= OVERRIDING { USER | SYSTEM } VALUE

<value_list> ::= <ins_value> [, <ins_value> ...]

<ins_value> ::= <value-expression> | DEFAULT

<returning_list> ::= * | <output_column> [, <output_column> ...]

<output_column> ::=
    target.* | NEW.* | OLD.*
  | <return_expression> [COLLATE collation] [[AS] alias]

<return_expression> ::=
    <value-expression>
  | [target.]col_name
  | NEW.col_name
  | OLD.col_name

<value-expression> ::=
    <literal>
  | <context-variable>
  | any other expression returning a single value of a Firebird data type or NULL

<variables> ::= [:]varname [, [:]varname ...]
Argument | Description |
---|---|
target |
The name of the table or view where the record(s) is to be updated or a new record inserted |
col_name |
Name of a column in the table or view |
value-expression |
An expression whose value is to be used for inserting or updating the table, or returning a value |
return_expression |
An expression returned in the RETURNING clause |
varname |
Variable name — PSQL only |
UPDATE OR INSERT inserts a new record or updates one or more existing records. The action taken depends on the values provided for the columns in the MATCHING clause (or, if the latter is absent, in the primary key). If there are records found matching those values, they are updated. If not, a new record is inserted. A match only counts if all the columns in the MATCHING clause or primary key columns are equal. Matching is done with the IS NOT DISTINCT operator, so one NULL matches another.
-
If the table has no primary key, the MATCHING clause is mandatory.
-
In the MATCHING list as well as in the update/insert column list, each column name may occur only once.
-
The “INTO <variables>” subclause is only available in PSQL.
-
When values are returned into the context variable NEW, this name must not be preceded by a colon (“:”).
6.4.1. The ORDER BY and ROWS Clauses
See The ORDER BY and ROWS Clauses for UPDATE.
6.4.2. The RETURNING Clause
The optional RETURNING clause, if present, need not contain all the columns mentioned in the statement and may also contain other columns or expressions. The returned values reflect any changes that may have been made in BEFORE triggers, but not those in AFTER triggers. OLD.fieldname and NEW.fieldname may both be used in the list of columns to return; for field names not preceded by either of these, the new value is returned.
The user executing the statement needs to have SELECT privileges on the columns specified in the RETURNING clause.
The syntax of the returning_list is similar to the column list of a SELECT clause. It is possible to reference all columns using *, or table_name.*, NEW.* and/or OLD.*.
In DSQL, a statement with a RETURNING clause can return zero or more rows. The update or insert is executed to completion before rows are returned. In PSQL, if a RETURNING clause is present and more than one matching record is found, an error “multiple rows in singleton select” is raised. This behaviour may change in a future Firebird version.
The optional INTO sub-clause is only valid in PSQL.
6.4.3. Example of UPDATE OR INSERT
Modifying data in a table, using UPDATE OR INSERT in a PSQL module. The return value is passed to a local variable, whose colon prefix is optional.
UPDATE OR INSERT INTO Cows (Name, Number, Location)
VALUES ('Suzy Creamcheese', 3278823, 'Green Pastures')
MATCHING (Number)
RETURNING rec_id into :id;
UPDATE OR INSERT INTO Cows (Name, Number, Location)
VALUES ('Suzy Creamcheese', 3278823, 'Green Pastures')
MATCHING (Number)
RETURNING old.*, new.*;
6.5. DELETE
Deletes rows from a table or updatable view
DELETE FROM target [[AS] alias]
  [WHERE { <search-conditions> | CURRENT OF cursorname }]
  [PLAN <plan_items>]
  [ORDER BY <sort_items>]
  [ROWS m [TO n]]
  [SKIP LOCKED]
  [RETURNING <returning_list> [INTO <variables>]]

<returning_list> ::= * | <output_column> [, <output_column> ...]

<output_column> ::=
    target.*
  | <return_expression> [COLLATE collation] [[AS] alias]

<return_expression> ::= <value-expression> | [target.]col_name

<value-expression> ::=
    <literal>
  | <context-variable>
  | any other expression returning a single value of a Firebird data type or NULL

<variables> ::= [:]varname [, [:]varname ...]
Argument | Description |
---|---|
target |
The name of the table or view from which the records are to be deleted |
alias |
Alias for the target table or view |
search-conditions |
Search condition limiting the set of rows being targeted for deletion |
cursorname |
The name of the cursor in which current record is positioned for deletion |
plan_items |
Query plan clause |
sort_items |
Column list for the ORDER BY clause |
m, n |
Integer expressions for limiting the number of rows being deleted |
return_expression |
An expression to be returned in the RETURNING clause |
value-expression |
An expression whose value is used for returning |
varname |
Name of a PSQL variable |
DELETE
removes rows from a database table or from one or more of the tables that underlie a view.
WHERE
and ROWS
clauses can limit the number of rows deleted.
If neither WHERE
nor ROWS
is present, DELETE
removes all the rows in the relation.
6.5.1. Aliases
If an alias is specified for the target table or view, it must be used to qualify all field name references in the DELETE
statement.
Examples
Supported usage:
delete from Cities where name starting 'Alex';
delete from Cities where Cities.name starting 'Alex';
delete from Cities C where name starting 'Alex';
delete from Cities C where C.name starting 'Alex';
Not possible:
delete from Cities C where Cities.name starting 'Alex';
6.5.2. WHERE
The WHERE
clause sets the conditions that limit the set of records for a searched delete.
In PSQL, if a named cursor is being used for deleting a set, using the WHERE CURRENT OF
clause, the action is limited to the row where the cursor is currently positioned.
This is a positioned delete.
To be able to use the WHERE CURRENT OF
clause in DSQL, the cursor name needs to be set on the statement handle before executing the statement.
Examples
DELETE FROM People
WHERE firstname <> 'Boris' AND lastname <> 'Johnson';
DELETE FROM employee e
WHERE NOT EXISTS(
SELECT *
FROM employee_project ep
WHERE e.emp_no = ep.emp_no);
DELETE FROM Cities
WHERE CURRENT OF Cur_Cities; -- ESQL and PSQL only
6.5.3. PLAN
A PLAN
clause allows the user to optimize the operation manually.
DELETE FROM Submissions
WHERE date_entered < '1-Jan-2002'
PLAN (Submissions INDEX ix_subm_date);
6.5.4. ORDER BY
and ROWS
The ORDER BY
clause orders the set before the actual deletion takes place.
It only makes sense in combination with ROWS
, but is also valid without it.
The ROWS
clause limits the number of rows being deleted.
Integer literals or any integer expressions can be used for the arguments m and n.
If ROWS
has one argument, m, the rows to be deleted will be limited to the first m rows.
-
If m > the number of rows being processed, the entire set of rows is deleted
-
If m = 0, no rows are deleted
-
If m < 0, an error occurs and the deletion fails
If two arguments are used, m and n, ROWS
limits the rows being deleted to rows from m to n inclusively.
Both arguments are integers and start from 1.
-
If m > the number of rows being processed, no rows are deleted
-
If m > 0 and <= the number of rows in the set and n is outside these values, rows from m to the end of the set are deleted
-
If m < 1 or n < 1, an error occurs and the deletion fails
-
If n = m - 1, no rows are deleted
-
If n < m - 1, an error occurs and the deletion fails
Examples
Deleting the oldest purchase:
DELETE FROM Purchases
ORDER BY date ROWS 1;
Deleting the highest custno(s):
DELETE FROM Sales
ORDER BY custno DESC ROWS 1 to 10;
Deleting all sales, ORDER BY
clause pointless:
DELETE FROM Sales
ORDER BY custno DESC;
Deleting one record starting from the end, i.e. from Z…:
DELETE FROM popgroups
ORDER BY name DESC ROWS 1;
Deleting the five oldest groups:
DELETE FROM popgroups
ORDER BY formed ROWS 5;
No sorting (ORDER BY
) is specified, so 8 of the found records, starting from the fifth one, will be deleted:
DELETE FROM popgroups
ROWS 5 TO 12;
6.5.5. SKIP LOCKED
When the SKIP LOCKED
clause is specified, records locked by a different transaction are skipped by the statement and are not deleted.
When a ROWS
clause is specified, the “skip locked” check is performed after skipping the requested number of rows specified, and before counting the number of rows to delete.
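A typical use of SKIP LOCKED is a work queue processed by several transactions concurrently. The following sketch assumes a hypothetical job_queue table; the names are for illustration only:

DELETE FROM job_queue
  ROWS 10
  SKIP LOCKED
  RETURNING id, payload;

Each consumer transaction deletes and returns up to ten rows that are not locked by other transactions; locked rows are passed over instead of causing a wait or an update conflict.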
6.5.6. RETURNING
A DELETE
statement may optionally include a RETURNING
clause to return values from the deleted rows.
The clause, if present, need not contain all the relation’s columns and may also contain other columns or expressions.
The user executing the statement needs to have SELECT
privileges on the columns specified in the RETURNING
clause.
The syntax of the returning_list is similar to the column list of a SELECT
clause.
It is possible to reference all columns using *
, or table_name.*
.
-
In DSQL, a positioned delete statement (WHERE CURRENT OF …) with RETURNING always returns a singleton, never a multi-row set. If no record is deleted, the returned columns contain NULL.
-
A normal DELETE statement can return zero or more rows; the deletion is executed to completion before rows are returned.
-
In PSQL, if a RETURNING clause is present and more than one matching record is found, an error “multiple rows in singleton select” is raised. This behaviour may change in a future Firebird version.
-
The INTO clause is available only in PSQL. If no row is deleted, nothing is returned and the target variables keep their values.
Examples
DELETE FROM Scholars
WHERE firstname = 'Henry' and lastname = 'Higgins'
RETURNING lastname, fullname, id;
DELETE FROM Scholars
WHERE firstname = 'Henry' and lastname = 'Higgins'
RETURNING *;
DELETE FROM Dumbbells
ORDER BY iq DESC
ROWS 1
RETURNING lastname, iq into :lname, :iq;
6.6. MERGE
Merges data from a source set into a target table or updatable view
MERGE INTO target [[AS] target_alias]
  USING <table-reference>
  ON <join_condition>
  <merge_when> [<merge_when> ...]
  [PLAN <plan-expr>]
  [ORDER BY <ordering-list>]
  [RETURNING <returning_list> [INTO <variables>]]

<merge_when> ::=
    <merge_when_matched>
  | <merge_when_not_matched_target>
  | <merge_when_not_matched_source>

<merge_when_matched> ::=
  WHEN MATCHED [AND <condition>] THEN
    { UPDATE SET <assignment-list> | DELETE }

<merge_when_not_matched_target> ::=
  WHEN NOT MATCHED [BY TARGET] [AND <condition>] THEN
    INSERT [( <column_list> )] [<override_opt>]
    VALUES ( <value_list> )

<merge_when_not_matched_source> ::=
  WHEN NOT MATCHED BY SOURCE [AND <condition>] THEN
    { UPDATE SET <assignment-list> | DELETE }

<table-reference> ::= <table-primary> | <joined-table>

<table-primary> ::=
    <table-or-query-name> [[AS] correlation-name]
  | [LATERAL] <derived-table> [<correlation-or-recognition>]
  | <parenthesized-joined-table>

<assignment-list> ::= col_name = <m_value> [, col_name = <m_value> ...]

<override_opt> ::= OVERRIDING {USER | SYSTEM} VALUE

<column_list> ::= colname [, colname ...]

<value_list> ::= <m_value> [, <m_value> ...]

<m_value> ::= <value-expression> | DEFAULT

<returning_list> ::= * | <output_column> [, <output_column> ...]

<output_column> ::=
    target.* | NEW.* | OLD.*
  | <return_expression> [COLLATE collation] [[AS] alias]

<return_expression> ::=
    <value-expression>
  | [target.]col_name
  | NEW.col_name
  | OLD.col_name

<value-expression> ::=
    <literal>
  | <context-variable>
  | any other expression returning a single
    value of a Firebird data type or NULL

<variables> ::= [:]varname [, [:]varname ...]
Argument | Description |
---|---|
target |
Name of target relation (table or updatable view) |
table-reference |
Data source. It can be a table, a view, a stored procedure, a derived table or a parenthesized joined table |
target_alias |
Alias for the target relation (table or updatable view) |
join_conditions |
The (ON) condition for matching source records with records in the target table or view |
condition |
Additional test condition in a WHEN MATCHED or WHEN NOT MATCHED clause |
col_name |
Name of a column in the target relation |
value-expression |
The value assigned to a column in the target table. This expression may be a literal value, a PSQL variable, a column from the source, or a compatible context variable |
return_expression |
The expression to be returned in the RETURNING clause |
ret_alias |
Alias for the value expression in the RETURNING clause |
varname |
Name of a PSQL local variable |
The MERGE
statement merges records from a source <table-reference> into a target table or updatable view.
The source may be a table, view or “anything you can SELECT
from” in general.
Each source record will be used to update one or more target records, insert a new record in the target table, delete a record from the target table or do nothing.
The action taken depends on the supplied join condition, the WHEN
clause(s), and the — optional — condition in the WHEN
clause.
The join condition and condition in the WHEN
will typically contain a comparison of fields in the source and target relations.
Multiple WHEN MATCHED
and WHEN NOT MATCHED
clauses are allowed.
For each row in the source, the WHEN
clauses are checked in the order they are specified in the statement.
If the condition in the WHEN
clause does not evaluate to true, the clause is skipped, and the next clause will be checked.
This will be done until the condition for a WHEN
clause evaluates to true, or a WHEN
clauses without condition matches, or there are no more WHEN
clauses.
If a matching clause is found, the action associated with the clause is executed.
For each row in the source, at most one action is executed.
If the WHEN MATCHED
clause is present, and several records match a single record in the target table, an error is raised.
Contrary to the other WHEN
clauses, WHEN NOT MATCHED BY SOURCE
clauses evaluate records in the target which match no record in the source.
At least one WHEN clause must be present.
Currently, in PSQL, the |
6.6.1. The ORDER BY
Clause
The ORDER BY
clause can be used to influence the order in which rows are evaluated.
The primary use case is when combined with RETURNING
, to influence the order in which rows are returned.
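For example, to make the affected rows come back in a deterministic order, a sketch with hypothetical prices and new_prices tables:

MERGE INTO prices p
  USING new_prices np
  ON p.item_id = np.item_id
  WHEN MATCHED THEN
    UPDATE SET p.price = np.price
  ORDER BY np.item_id
  RETURNING p.item_id, OLD.price, NEW.price;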
6.6.2. The RETURNING
Clause
A MERGE
statement can contain a RETURNING
clause to return rows added, modified or removed.
The merge is executed to completion before rows are returned.
The RETURNING
clause can contain any columns from the target table (or updatable view), as well as other columns (e.g. from the source) and expressions.
The user executing the statement needs to have SELECT
privileges on the columns specified in the RETURNING
clause.
In PSQL, if a RETURNING
clause is present and more than one matching record is found, an error “multiple rows in singleton select” is raised.
This behaviour may change in a future Firebird version.
The optional INTO
sub-clause is only valid in PSQL.
Column names can be qualified by the OLD
or NEW
prefix to define exactly what value to return: before or after modification. The returned values include the changes made by BEFORE
triggers.
The syntax of the returning_list is similar to the column list of a SELECT
clause.
It is possible to reference all columns using *
, or table_name.*
, NEW.*
and/or OLD.*
.
For the UPDATE
or INSERT
action, unqualified column names, or those qualified by the target table name or alias, will behave as if qualified by NEW
, while for the DELETE
action as if qualified by OLD
.
The following example modifies the previous example to affect one line, and adds a RETURNING
clause to return the old and new quantity of goods, and the difference between those values.
MERGE with a RETURNING clause

MERGE INTO PRODUCT_INVENTORY AS TARGET
USING (
SELECT
SL.ID_PRODUCT,
SUM(SL.QUANTITY)
FROM SALES_ORDER_LINE SL
JOIN SALES_ORDER S ON S.ID = SL.ID_SALES_ORDER
WHERE S.BYDATE = CURRENT_DATE
AND SL.ID_PRODUCT = :ID_PRODUCT
GROUP BY 1
) AS SRC (ID_PRODUCT, QUANTITY)
ON TARGET.ID_PRODUCT = SRC.ID_PRODUCT
WHEN MATCHED AND TARGET.QUANTITY - SRC.QUANTITY <= 0 THEN
DELETE
WHEN MATCHED THEN
UPDATE SET
TARGET.QUANTITY = TARGET.QUANTITY - SRC.QUANTITY,
TARGET.BYDATE = CURRENT_DATE
RETURNING OLD.QUANTITY, NEW.QUANTITY, SRC.QUANTITY
INTO :OLD_QUANTITY, :NEW_QUANTITY, :DIFF_QUANTITY
6.6.3. Examples of MERGE
-
Update books when present, or add new record if absent
MERGE INTO books b
  USING purchases p
  ON p.title = b.title and p.type = 'bk'
  WHEN MATCHED THEN
    UPDATE SET b.desc = b.desc || '; ' || p.desc
  WHEN NOT MATCHED THEN
    INSERT (title, desc, bought) values (p.title, p.desc, p.bought);
-
Using a derived table
MERGE INTO customers c
  USING (SELECT * from customers_delta WHERE id > 10) cd
  ON (c.id = cd.id)
  WHEN MATCHED THEN
    UPDATE SET name = cd.name
  WHEN NOT MATCHED THEN
    INSERT (id, name) values (cd.id, cd.name);
-
Together with a recursive CTE
MERGE INTO numbers
  USING (
    WITH RECURSIVE r(n) AS (
      SELECT 1 FROM rdb$database
      UNION ALL
      SELECT n+1 FROM r WHERE n < 200
    )
    SELECT n FROM r
  ) t
  ON numbers.num = t.n
  WHEN NOT MATCHED THEN
    INSERT(num) VALUES(t.n);
-
Using the DELETE clause

MERGE INTO SALARY_HISTORY
  USING (
    SELECT EMP_NO
    FROM EMPLOYEE
    WHERE DEPT_NO = 120) EMP
  ON SALARY_HISTORY.EMP_NO = EMP.EMP_NO
  WHEN MATCHED THEN DELETE
-
The following example updates the PRODUCT_INVENTORY table daily based on orders processed in the SALES_ORDER_LINE table. If the stock level of the product would drop to zero or lower, then the row for that product is removed from the PRODUCT_INVENTORY table.

MERGE INTO PRODUCT_INVENTORY AS TARGET
  USING (
    SELECT
      SL.ID_PRODUCT,
      SUM(SL.QUANTITY)
    FROM SALES_ORDER_LINE SL
    JOIN SALES_ORDER S ON S.ID = SL.ID_SALES_ORDER
    WHERE S.BYDATE = CURRENT_DATE
    GROUP BY 1
  ) AS SRC (ID_PRODUCT, QUANTITY)
  ON TARGET.ID_PRODUCT = SRC.ID_PRODUCT
  WHEN MATCHED AND TARGET.QUANTITY - SRC.QUANTITY <= 0 THEN
    DELETE
  WHEN MATCHED THEN
    UPDATE SET
      TARGET.QUANTITY = TARGET.QUANTITY - SRC.QUANTITY,
      TARGET.BYDATE = CURRENT_DATE
6.7. EXECUTE PROCEDURE
Executes a stored procedure
EXECUTE PROCEDURE procname
  [{ <inparam-list> | ( <inparam-list> ) }]
  [RETURNING_VALUES { <outvar-list> | ( <outvar-list> ) }]

<inparam-list> ::= <inparam> [, <inparam> ...]

<outvar-list> ::= <outvar> [, <outvar> ...]

<outvar> ::= [:]varname
Argument | Description |
---|---|
procname |
Name of the stored procedure |
inparam |
An expression evaluating to the declared data type of an input parameter |
varname |
A PSQL variable to receive the return value |
Executes an executable stored procedure, taking a list of one or more input parameters, if they are defined for the procedure, and returning a one-row set of output values, if they are defined for the procedure.
6.7.1. “Executable” Stored Procedure
The EXECUTE PROCEDURE
statement is most commonly used to invoke “executable” stored procedures to perform some data-modifying task at the server side — those that do not contain any SUSPEND
statements in their code.
They can be designed to return a result set, consisting of only one row, which is usually passed, via a set of RETURNING_VALUES()
variables, to another stored procedure that calls it.
Client interfaces usually have an API wrapper that can retrieve the output values into a single-row buffer when calling EXECUTE PROCEDURE
in DSQL.
Invoking “selectable” stored procedures is also possible with EXECUTE PROCEDURE
, but it returns only the first row of an output set which is almost surely designed to be multi-row.
Selectable stored procedures are designed to be invoked by a SELECT
statement, producing output that behaves like a virtual table.
6.7.2. Examples of EXECUTE PROCEDURE
-
In PSQL, with optional colons and without optional parentheses:
EXECUTE PROCEDURE MakeFullName
  :FirstName, :MiddleName, :LastName
  RETURNING_VALUES :FullName;
-
In Firebird’s command-line utility isql, with literal parameters and optional parentheses:
EXECUTE PROCEDURE MakeFullName ('J', 'Edgar', 'Hoover');
In DSQL (e.g. in isql),
RETURNING_VALUES
is not used. Any output values are captured by the application and displayed automatically. -
A PSQL example with expression parameters and optional parentheses:
EXECUTE PROCEDURE MakeFullName
  ('Mr./Mrs. ' || FirstName, MiddleName, upper(LastName))
  RETURNING_VALUES (FullName);
6.8. EXECUTE BLOCK
Creates an “anonymous” block of PSQL code in DSQL for immediate execution
DSQL
EXECUTE BLOCK [(<inparams>)]
  [RETURNS (<outparams>)]
  <psql-module-body>

<inparams> ::= <param_decl> = ? [, <inparams> ]

<outparams> ::= <param_decl> [, <outparams>]

<param_decl> ::= paramname <domain_or_non_array_type> [NOT NULL] [COLLATE collation]

<domain_or_non_array_type> ::= !! See Scalar Data Types Syntax !!

<psql-module-body> ::= !! See Syntax of a Module Body !!
Argument | Description |
---|---|
param_decl |
Name and description of an input or output parameter |
paramname |
The name of an input or output parameter of the procedural block, up to 63 characters long. The name must be unique among input and output parameters and local variables in the block |
collation |
Collation |
Executes a block of PSQL code as if it were a stored procedure, optionally with input and output parameters and variable declarations. This allows the user to perform “on-the-fly” PSQL within a DSQL context.
6.8.1. Examples
-
This example injects the numbers 0 through 127 and their corresponding ASCII characters into the table
ASCIITABLE
:

EXECUTE BLOCK
AS
  DECLARE i INT = 0;
BEGIN
  WHILE (i < 128) DO
  BEGIN
    INSERT INTO AsciiTable VALUES (:i, ascii_char(:i));
    i = i + 1;
  END
END
-
The next example calculates the geometric mean of two numbers and returns it to the user:
EXECUTE BLOCK (x DOUBLE PRECISION = ?, y DOUBLE PRECISION = ?)
RETURNS (gmean DOUBLE PRECISION)
AS
BEGIN
  gmean = SQRT(x*y);
  SUSPEND;
END
Because this block has input parameters, it has to be prepared first. Then the parameters can be set and the block executed. It depends on the client software how this must be done and even if it is possible at all — see the notes below.
-
Our last example takes two integer values, smallest and largest. For all the numbers in the range smallest…largest, the block outputs the number itself, its square, its cube and its fourth power.

EXECUTE BLOCK (smallest INT = ?, largest INT = ?)
RETURNS (number INT, square BIGINT, cube BIGINT, fourth BIGINT)
AS
BEGIN
  number = smallest;
  WHILE (number <= largest) DO
  BEGIN
    square = number * number;
    cube = number * square;
    fourth = number * cube;
    SUSPEND;
    number = number + 1;
  END
END
Again, it depends on the client software if and how you can set the parameter values.
6.8.2. Input and output parameters
Executing a block without input parameters should be possible with every Firebird client that allows the user to enter their own DSQL statements. If there are input parameters, things get trickier: these parameters must get their values after the statement is prepared, but before it is executed. This requires special provisions, which not every client application offers. (Firebird’s own isql, for one, doesn’t.)
The server only accepts question marks (“?
”) as placeholders for the input values, not “:a
”, “:MyParam
” etc., or literal values.
Client software may support the “:xxx
” form though, and will preprocess it before sending it to the server.
If the block has output parameters, you must use SUSPEND
or nothing will be returned.
Output is always returned in the form of a result set, just as with a SELECT
statement.
You can’t use RETURNING_VALUES
or execute the block INTO
some variables, even if there is only one result row.
6.8.3. Statement Terminators
Some SQL statement editors — specifically the isql utility that comes with Firebird, and possibly some third-party editors — employ an internal convention that requires all statements to be terminated with a semicolon. This creates a conflict with PSQL syntax when coding in these environments. If you are unacquainted with this problem and its solution, please study the details in the PSQL chapter in the section entitled Switching the Terminator in isql.
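As a brief illustration, the usual workaround in isql is to switch the terminator before entering a block, so that the semicolons inside the block are not taken as the end of the whole statement (a sketch; any unused character can serve as the temporary terminator):

SET TERM #;

EXECUTE BLOCK
AS
BEGIN
  /* statements inside the block still end with ';' */
END#

SET TERM ;#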
7. Procedural SQL (PSQL) Statements
Procedural SQL (PSQL) is a procedural extension of SQL. This language subset is used for writing PSQL modules: stored procedures, stored functions, triggers, and PSQL blocks.
PSQL provides all the basic constructs of traditional structured programming languages, and also includes DML statements (SELECT
, INSERT
, UPDATE
, DELETE
, etc.), with a slightly modified syntax in some cases.
7.1. Elements of PSQL
A PSQL module may contain declarations of local variables, subroutines and cursors, assignments, conditional statements, loops, statements for raising custom exceptions, error handling and sending messages (events) to client applications.
DML triggers have access to special context variables, two “records” that store, respectively, the NEW
values for all columns during insert and update activity, and the OLD
values during update and delete work, and three Boolean variables — INSERTING
, UPDATING
and DELETING
— to determine the event that fired the trigger.
Statements that modify metadata (DDL) are not available in PSQL.
7.1.1. DML Statements with Parameters
If DML statements (SELECT
, INSERT
, UPDATE
, DELETE
, etc.) in the body of a module (procedure, function, trigger or block) use parameters, only named parameters can be used.
If DML statements contain named parameters, then they must be previously declared as local variables using DECLARE [VARIABLE]
in the declaration section of the module, or as input or output variables in the module header.
When a DML statement with parameters is included in PSQL code, the parameter name must be prefixed by a colon (‘:
’) in most situations.
The colon is optional in statement syntax that is specific to PSQL, such as assignments and conditionals and the INTO
clause.
The colon prefix on parameters is not required when calling stored procedures from within another PSQL module.
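The following sketch (hypothetical table and column names) shows the three situations: assignment without a colon, the optional colon in an INTO clause, and the colon prefix on a parameter inside a DML statement:

CREATE PROCEDURE raise_salaries (pct NUMERIC(6,2))
AS
  DECLARE total NUMERIC(18,2);
BEGIN
  total = 0;        -- PSQL assignment: colon prefix optional
  SELECT SUM(salary) FROM employee
    INTO total;     -- INTO clause: colon prefix optional
  UPDATE employee
    SET salary = salary + salary * :pct / 100;  -- inside DML: colon prefix
END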
7.1.2. Transactions
Stored procedures and functions (including those defined in packages) are executed in the context of the transaction in which they are called. Triggers are executed as an intrinsic part of the operation of the DML statement: thus, their execution is within the same transaction context as the statement itself. Individual transactions are launched for database event triggers fired on connect or disconnect.
Statements that start and end transactions are not available in PSQL, but it is possible to run a statement or a block of statements in an autonomous transaction.
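As a sketch, a database event trigger can log connections in an autonomous transaction, so the log record is committed independently of the transaction started for the trigger (connection_log is a hypothetical table):

CREATE TRIGGER tr_log_connect
ON CONNECT
AS
BEGIN
  IN AUTONOMOUS TRANSACTION DO
    INSERT INTO connection_log (user_name, logged_at)
    VALUES (CURRENT_USER, CURRENT_TIMESTAMP);
END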
7.1.3. Module Structure
PSQL code modules consist of a header and a body.
The DDL statements for defining them are complex statements;
that is, they consist of a single statement that encloses blocks of multiple statements.
These statements begin with a verb (CREATE
, ALTER
, DROP
, RECREATE
, CREATE OR ALTER
, or EXECUTE BLOCK
) and end with the last END
statement of the body.
The Module Header
The header provides the module name and defines any input and output parameters or — for functions — the return type.
Stored procedures and PSQL blocks may have input and output parameters.
Functions may have input parameters and must have a scalar return type.
Triggers do not have either input or output parameters, but DML triggers do have the NEW
and OLD
“records”, and INSERTING
, UPDATING
and DELETING
variables.
The header of a trigger indicates the DML event (insert, update or delete, or a combination) or DDL or database event and the phase of operation (BEFORE
or AFTER
that event) that will cause it to “fire”.
The Module Body
The module body is either a PSQL module body, or an external module body. PSQL blocks can only have a PSQL module body.
<module-body> ::=
    <psql-module-body> | <external-module-body>

<psql-module-body> ::=
  AS
    [<forward-declarations>]
    [<declarations>]
  BEGIN
    [<PSQL_statements>]
  END

<external-module-body> ::=
  EXTERNAL [NAME <extname>] ENGINE engine
  [AS '<extbody>']

<forward-declarations> ::= <forward-declare-item> [<forward-declare-item> ...]

<declarations> ::= <declare-item> [<declare-item> ...]

<forward-declare-item> ::= <subfunc-forward> | <subproc-forward>

<declare-item> ::=
    <declare-var>
  | <declare-cursor>
  | <subfunc-def>
  | <subproc-def>

<extname> ::= '<module-name>!<routine-name>[!<misc-info>]'

<declare-var> ::= !! See DECLARE VARIABLE !!

<declare-cursor> ::= !! See DECLARE .. CURSOR !!

<subfunc-forward>, <subfunc-def> ::= !! See DECLARE FUNCTION !!

<subproc-forward>, <subproc-def> ::= !! See DECLARE PROCEDURE !!
Parameter | Description |
---|---|
declarations |
Section for declaring local variables, named cursors, and subroutines |
PSQL_statements |
Procedural SQL statements.
Some PSQL statements may not be valid in all types of PSQL.
For example, |
subfunc-forward |
Sub-function forward declaration |
subproc-forward |
Sub-procedure forward declaration |
declare_var |
Local variable declaration |
declare_cursor |
Named cursor declaration |
subfunc-def |
Sub-function declaration |
subproc-def |
Sub-procedure declaration |
extname |
String identifying the external procedure |
engine |
String identifying the UDR engine |
extbody |
External procedure body. A string literal that can be used by UDRs for various purposes. |
module-name |
The name of the module that contains the procedure |
routine-name |
The internal name of the procedure inside the external module |
misc-info |
Optional string that is passed to the procedure in the external module |
The PSQL Module Body
The PSQL module body starts with an optional section that declares variables and subroutines, followed by a block of statements that run in a logical sequence, like a program.
A block of statements — or compound statement — is enclosed by the BEGIN
and END
keywords, and is executed as a single unit of code.
The main BEGIN…END
block may contain any number of other BEGIN…END
blocks, both embedded and sequential.
Blocks can be nested to a maximum depth of 512 blocks.
All statements except BEGIN
and END
are terminated by semicolons (‘;
’).
No other character is valid for use as a terminator for PSQL statements.
The External Module Body
The external module body specifies the UDR engine used to execute the external module, and optionally specifies the name of the UDR routine to call (<extname>) and/or a string (<extbody>) with UDR-specific semantics.
Configuration of external modules and UDR engines is not covered further in this Language Reference. Consult the documentation of a specific UDR engine for details.
7.2. Stored Procedures
A stored procedure is executable code stored in the database metadata for execution on the server. It can be called by other stored procedures (including itself), functions, triggers and client applications. A procedure that calls itself is known as recursive.
7.2.1. Benefits of Stored Procedures
Stored procedures have the following advantages:
- Modularity
-
applications working with the database can use the same stored procedure, thereby reducing the size of the application code and avoiding code duplication.
- Simpler Application Support
-
when a stored procedure is modified, changes appear immediately to all host applications, without the need to recompile them if the parameters were unchanged.
- Enhanced Performance
-
since stored procedures are executed on a server instead of at the client, network traffic is reduced, which improves performance.
7.2.2. Types of Stored Procedures
Firebird supports two types of stored procedures: executable and selectable.
Executable Procedures
Executable procedures usually modify data in a database.
They can receive input parameters and return a single set of output (RETURNS
) parameters.
They are called using the EXECUTE PROCEDURE
statement.
See an example of an executable stored procedure at the end of the CREATE PROCEDURE
section of Chapter 5, Data Definition (DDL) Statements.
Selectable Procedures
Selectable stored procedures usually retrieve data from a database, returning an arbitrary number of rows to the caller. The caller receives the output one row at a time from a row buffer that the database engine prepares for it.
Selectable procedures can be useful for obtaining complex sets of data that are often impossible or too difficult or too slow to retrieve using regular DSQL SELECT
queries.
Typically, this style of procedure iterates through a looping process of extracting data, perhaps transforming it before filling the output variables (parameters) with fresh data at each iteration of the loop.
A SUSPEND
statement at the end of the iteration fills the buffer and waits for the caller to fetch the row.
Execution of the next iteration of the loop begins when the buffer has been cleared.
Selectable procedures may have input parameters, and the output set is specified by the RETURNS
clause in the header.
A selectable stored procedure is called with a SELECT
statement.
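The looping pattern described above can be sketched as follows; the procedure name and columns are hypothetical:

CREATE PROCEDURE squares (n INTEGER)
RETURNS (i INTEGER, sq BIGINT)
AS
BEGIN
  i = 1;
  WHILE (i <= n) DO
  BEGIN
    sq = CAST(i AS BIGINT) * i;
    SUSPEND;  -- fills the row buffer and waits for the caller to fetch
    i = i + 1;
  END
END

SELECT i, sq FROM squares(5);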
See an example of a selectable stored procedure at the end of the CREATE PROCEDURE
section of Chapter 5, Data Definition (DDL) Statements.
7.2.3. Creating a Stored Procedure
The syntax for creating executable stored procedures and selectable stored procedures is the same.
The difference comes in the logic of the program code, specifically the absence or presence of a SUSPEND
statement.
For information about creating stored procedures, see CREATE PROCEDURE
in Chapter 5, Data Definition (DDL) Statements.
7.2.4. Modifying a Stored Procedure
For information about modifying existing stored procedures, see ALTER PROCEDURE
, CREATE OR ALTER PROCEDURE
, RECREATE PROCEDURE
.
7.2.5. Dropping a Stored Procedure
For information about dropping (deleting) stored procedures, see DROP PROCEDURE
.
7.3. Stored Functions
A stored function is executable code stored in the database metadata for execution on the server. It can be called by other stored functions (including itself), procedures, triggers, and client applications through DML statements. A function that calls itself is known as recursive.
Unlike stored procedures, stored functions always return one scalar value.
To return a value from a stored function, use the RETURN
statement, which immediately terminates the function.
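A minimal sketch of a stored function and its use in DML (the names are hypothetical):

CREATE FUNCTION add_vat (amount NUMERIC(18,2))
RETURNS NUMERIC(18,2)
AS
BEGIN
  RETURN amount * 1.21;  -- RETURN immediately terminates the function
END

SELECT add_vat(price) FROM goods;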
7.3.1. Creating a Stored Function
For information about creating stored functions, see CREATE FUNCTION
in Chapter 5, Data Definition (DDL) Statements.
7.3.2. Modifying a Stored Function
For information about modifying stored functions, see ALTER FUNCTION
, CREATE OR ALTER FUNCTION
, RECREATE FUNCTION
.
7.3.3. Dropping a Stored Function
For information about dropping (deleting) stored functions, see DROP FUNCTION
.
7.4. PSQL Blocks
A self-contained, unnamed (“anonymous”) block of PSQL code can be executed dynamically in DSQL, using the EXECUTE BLOCK
syntax.
The header of a PSQL block may optionally contain input and output parameters.
The body may contain local variables, cursor declarations and local routines, followed by a block of PSQL statements, and is similar to a stored procedure.
A PSQL block cannot use a UDR module body.
A PSQL block is not defined and stored as an object, unlike stored procedures and triggers. It executes in run-time and cannot reference itself.
Like stored procedures, anonymous PSQL blocks can be used to process data and to retrieve data from the database.
EXECUTE BLOCK [(<inparam> = ? [, <inparam> = ? ...])]
  [RETURNS (<outparam> [, <outparam> ...])]
  <psql-module-body>

<psql-module-body> ::= !! See Syntax of Module Body !!
Argument | Description |
---|---|
inparam |
Input parameter description |
outparam |
Output parameter description |
declarations |
A section for declaring local variables and named cursors |
PSQL statements |
PSQL and DML statements |
See EXECUTE BLOCK
for details.
7.5. Packages
A package is a group of stored procedures and functions defined as a single database object.
Firebird packages are made up of two parts: a header (PACKAGE
keyword) and a body (PACKAGE BODY
keywords).
This separation is similar to Delphi modules;
the header corresponds to the interface part, and the body corresponds to the implementation part.
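A minimal sketch of the two-part structure (hypothetical package name and routine):

CREATE PACKAGE pk_util
AS
BEGIN
  -- header: declarations only, comparable to a Delphi interface section
  FUNCTION double_it (x INTEGER) RETURNS INTEGER;
END

CREATE PACKAGE BODY pk_util
AS
BEGIN
  -- body: implementations, comparable to a Delphi implementation section
  FUNCTION double_it (x INTEGER) RETURNS INTEGER
  AS
  BEGIN
    RETURN x * 2;
  END
END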
7.5.1. Benefits of Packages
The notion of “packaging” the code components of a database operation has several advantages:
- Modularisation
-
Blocks of interdependent code are grouped into logical modules, as done in other programming languages.
In programming, it is well recognised that grouping code in various ways, in namespaces, units or classes, for example, is a good thing. This is not possible with standard stored procedures and functions in the database. Although they can be grouped in different script files, two problems remain:
  - The grouping is not represented in the database metadata.
  - Scripted routines all participate in a flat namespace and are callable by everyone (we are not referring to security permissions here).
- Easier tracking of dependencies

  Packages make it easy to track dependencies between a collection of related routines, as well as between this collection and other routines, both packaged and unpackaged.

  Whenever a packaged routine uses a certain database object, a dependency on that object is registered in Firebird’s system tables. Thereafter, to drop or alter that object, you first need to remove whatever depends on it. Since dependencies on other objects exist only for the package body, and not the package header, the package body can easily be removed, even if another object depends on the package. When the body is dropped, the header remains, allowing you to recreate the body once the changes related to the removed object are done.
- Simplified permission management

  As Firebird, by default, runs routines with the privileges of the caller (invoker), it is also necessary to grant resource usage to each routine when those resources would not be directly accessible to the caller. Usage of each routine then needs to be granted to users and/or roles.
  Packaged routines do not have individual privileges. The privileges apply to the package as a whole. Privileges granted to packages are valid for all package body routines, including private ones, but are stored for the package header. An EXECUTE privilege on a package granted to a user (or other object) grants that user the privilege to execute all routines defined in the package header. For example:

  GRANT SELECT ON TABLE secret TO PACKAGE pk_secret;
  GRANT EXECUTE ON PACKAGE pk_secret TO ROLE role_secret;
- Private scopes

  Stored procedures and functions can be private; that is, available only for internal use within the defining package.

  All programming languages have the notion of routine scope, which is not possible without some form of grouping. Firebird packages also work like Delphi units in this regard. If a routine is not declared in the package header (interface) and is implemented in the body (implementation), it becomes a private routine. A private routine can only be called from inside its package.
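A minimal sketch of a package with a private routine; the package name `pk_demo` and routine names are hypothetical. `fn_helper` is implemented only in the body, so it is private and callable only from within the package:

```sql
CREATE PACKAGE pk_demo
AS
BEGIN
  FUNCTION fn_public (x INTEGER) RETURNS INTEGER;
END

CREATE PACKAGE BODY pk_demo
AS
BEGIN
  -- Private routine: not declared in the package header
  FUNCTION fn_helper (x INTEGER) RETURNS INTEGER
  AS
  BEGIN
    RETURN x * x;
  END

  FUNCTION fn_public (x INTEGER) RETURNS INTEGER
  AS
  BEGIN
    RETURN fn_helper(x) + 1;
  END
END
```

Calling `pk_demo.fn_helper` from outside the package raises an error, while `pk_demo.fn_public` is available to anyone with the EXECUTE privilege on the package.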
7.5.2. Creating a Package
For information on creating packages, see CREATE PACKAGE and CREATE PACKAGE BODY in Chapter 5, Data Definition (DDL) Statements.
7.5.3. Modifying a Package
For information on modifying an existing package header or body, see ALTER PACKAGE, CREATE OR ALTER PACKAGE, RECREATE PACKAGE, and RECREATE PACKAGE BODY.
7.5.4. Dropping a Package
For information on dropping (deleting) a package, see DROP PACKAGE and DROP PACKAGE BODY.
7.6. Triggers
A trigger is another form of executable code that is stored in the metadata of the database for execution by the server. A trigger cannot be called directly. It is called automatically (“fired”) when data-changing events involving one particular table or view occur, or on a specific database or DDL event.
A trigger applies to exactly one table or view or database event, and only one phase in an event (BEFORE
or AFTER
the event).
A single DML trigger might be written to fire only when one specific data-changing event occurs (INSERT
, UPDATE
or DELETE
), or it might be written to apply to more than one of those.
A DML trigger is executed in the context of the transaction in which the data-changing DML statement is running. For triggers that respond to database events, the rule is different: DDL triggers and transaction triggers run in the same transaction in which the triggering event occurred; for the other types, a new default transaction is started.
7.6.1. Firing Order (Order of Execution)
More than one trigger can be defined for each phase-event combination.
The order in which they are executed — also known as “firing order” — can be specified explicitly with the optional POSITION
argument in the trigger definition.
The position can be any number from 0 to 32,767.
Triggers with the lowest position numbers fire first.
If a POSITION
clause is omitted, the position is 0
.
If multiple triggers have the same phase and position, those triggers will be executed in an undefined order relative to one another, while the overall ordering by position and phase is still respected.
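As a sketch, two BEFORE INSERT triggers on a hypothetical table T1, where the POSITION numbers guarantee that TR_FIRST fires before TR_SECOND:

```sql
CREATE TRIGGER TR_FIRST FOR T1
BEFORE INSERT POSITION 1
AS
BEGIN
  /* fires first */
END

CREATE TRIGGER TR_SECOND FOR T1
BEFORE INSERT POSITION 2
AS
BEGIN
  /* fires second */
END
```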
7.6.2. DML Triggers
DML triggers are those that fire when a DML operation changes the state of data: updating rows in tables, inserting new rows or deleting rows. They can be defined for both tables and views.
Trigger Options
Six base options are available for the event-phase combination for tables and views:
Event | Trigger option
---|---
Before a new row is inserted | BEFORE INSERT
After a new row is inserted | AFTER INSERT
Before a row is updated | BEFORE UPDATE
After a row is updated | AFTER UPDATE
Before a row is deleted | BEFORE DELETE
After a row is deleted | AFTER DELETE
These base forms are for creating single phase/single-event triggers.
Firebird also supports forms for creating triggers for one phase and multiple-events, BEFORE INSERT OR UPDATE OR DELETE
, for example, or AFTER UPDATE OR DELETE
: the combinations are your choice.
“Multiphase” triggers, such as BEFORE OR AFTER …
, are not possible.
OLD and NEW Context Variables
For DML triggers, the Firebird engine provides access to sets of OLD
and NEW
context variables (or, “records”).
Each is a record of the values of the entire row: one for the values as they are before the data-changing event (the BEFORE
phase) and one for the values as they will be after the event (the AFTER
phase).
They are referenced in statements using the form NEW.column_name
and OLD.column_name
, respectively.
The column_name can be any column in the table’s definition, not just those that are being updated.
The NEW
and OLD
variables are subject to some rules:
- In all triggers, OLD is read-only
- In BEFORE UPDATE and BEFORE INSERT code, the NEW value is read/write, unless it is a COMPUTED BY column
- In INSERT triggers, references to OLD are invalid and will throw an exception
- In DELETE triggers, references to NEW are invalid and will throw an exception
- In all AFTER trigger code, NEW is read-only
7.6.3. Database Triggers
A trigger associated with a database or transaction event can be defined for the following events:
Event | Trigger keyword | Description
---|---|---
Connecting to a database | ON CONNECT | Before the trigger is executed, a transaction is automatically started with the default isolation level (snapshot (concurrency), write, wait)
Disconnecting from a database | ON DISCONNECT | Before the trigger is executed, a transaction is automatically started with the default isolation level (snapshot (concurrency), write, wait)
When a transaction is started | ON TRANSACTION START | The trigger is executed in the transaction context of the started transaction (immediately after start)
When a transaction is committed | ON TRANSACTION COMMIT | The trigger is executed in the transaction context of the committing transaction (immediately before commit)
When a transaction is cancelled | ON TRANSACTION ROLLBACK | The trigger is executed in the transaction context of the rolling back transaction (immediately before roll back)
7.6.4. DDL Triggers
DDL triggers fire on specified metadata change events in a specified phase.
BEFORE
triggers run before changes to system tables.
AFTER
triggers run after changes to system tables.
DDL triggers are a specific type of database trigger, so most rules for and semantics of database triggers also apply for DDL triggers.
Semantics
1. BEFORE triggers are fired before changes to the system tables. AFTER triggers are fired after system table changes.

   Important rule: the event type (BEFORE or AFTER) of a DDL trigger cannot be changed.

2. When a DDL statement fires a trigger that raises an exception (BEFORE or AFTER, intentionally or unintentionally), the statement will not be committed. That is, exceptions can be used to ensure that a DDL operation will fail if the conditions are not precisely as intended.

3. DDL trigger actions are executed only when the transaction in which the affected DDL command runs is committed. Never overlook the fact that what is possible to do in an AFTER trigger is exactly what is possible to do after a DDL command without autocommit. You cannot, for example, create a table and then use it in the trigger.

4. With CREATE OR ALTER statements, a trigger is fired once, at the CREATE event or the ALTER event, according to the previous existence of the object. With RECREATE statements, a trigger is fired for the DROP event if the object exists, and for the CREATE event.

5. ALTER and DROP events are generally not fired when the object name does not exist. For the exception, see point 6.

6. The exception to rule 5 is that BEFORE ALTER/DROP USER triggers fire even when the username does not exist. This is because, underneath, these commands perform DML on the security database, and the verification is not done before the command on it is run. This is likely to be different with embedded users, so do not write code that depends on this.

7. If an exception is raised after the DDL command starts its execution and before the AFTER triggers are fired, the AFTER triggers will not be fired.

8. Packaged procedures and functions do not fire individual {CREATE | ALTER | DROP} {PROCEDURE | FUNCTION} triggers.
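For example, point 2 can be used to block metadata changes. The sketch below assumes a hypothetical exception e_no_drop; the trigger raises it for any user other than SYSDBA, so the DROP TABLE statement fails:

```sql
CREATE EXCEPTION e_no_drop 'Dropping tables is not allowed';

SET TERM ^;
CREATE TRIGGER trg_no_drop_table
ACTIVE BEFORE DROP TABLE
AS
BEGIN
  IF (CURRENT_USER <> 'SYSDBA') THEN
    EXCEPTION e_no_drop;
END^
SET TERM ;^
```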
7.6.5. Creating Triggers
For information on creating triggers, see CREATE TRIGGER, CREATE OR ALTER TRIGGER, and RECREATE TRIGGER in Chapter 5, Data Definition (DDL) Statements.
7.6.6. Modifying Triggers
For information on modifying triggers, see ALTER TRIGGER, CREATE OR ALTER TRIGGER, and RECREATE TRIGGER.
7.6.7. Dropping a Trigger
For information on dropping (deleting) triggers, see DROP TRIGGER.
7.7. Writing the Body Code
This section takes a closer look at the procedural SQL language constructs and statements that are available for coding the body of stored procedures, stored functions, triggers, and PSQL blocks.
7.7.1. Assignment Statements
Assigns a value to a variable
varname = <value_expr>;
Argument | Description
---|---
varname | Name of a parameter or local variable
value_expr | An expression, constant or variable whose value resolves to the same data type as varname
PSQL uses the equal symbol (‘=
’) as its assignment operator.
The assignment statement assigns a SQL expression value on the right to the variable on the left of the operator.
The expression can be any valid SQL expression: it may contain literals, internal variable names, arithmetic, logical and string operations, calls to internal functions, stored functions or external functions (UDFs).
Example using assignment statements
CREATE PROCEDURE MYPROC (
a INTEGER,
b INTEGER,
name VARCHAR (30)
)
RETURNS (
c INTEGER,
str VARCHAR(100))
AS
BEGIN
-- assigning a constant
c = 0;
str = '';
SUSPEND;
-- assigning expression values
c = a + b;
str = name || CAST(b AS VARCHAR(10));
SUSPEND;
-- assigning expression value built by a query
c = (SELECT 1 FROM rdb$database);
-- assigning a value from a context variable
str = CURRENT_USER;
SUSPEND;
END
7.7.2. Management Statements in PSQL
Management statements are allowed in PSQL modules (triggers, procedures, functions and PSQL blocks), which is especially helpful for applications that need management statements to be executed at the start of a session, specifically in ON CONNECT
triggers.
The management statements permitted in PSQL include ALTER SESSION RESET, SET BIND, SET DECFLOAT, SET ROLE, SET TRUSTED ROLE, SET SESSION IDLE TIMEOUT, SET STATEMENT TIMEOUT and SET TIME ZONE.
Example of Management Statements in PSQL
create or alter trigger on_connect on connect
as
begin
set bind of decfloat to double precision;
set time zone 'America/Sao_Paulo';
end
7.7.3. DECLARE VARIABLE
Declares a local variable
DECLARE [VARIABLE] varname
  <domain_or_non_array_type> [NOT NULL]
  [COLLATE collation]
  [{DEFAULT | = } <initvalue>];

<domain_or_non_array_type> ::=
  !! See Scalar Data Types Syntax !!

<initvalue> ::= <literal> | <context_var>
Argument | Description
---|---
varname | Name of the local variable
collation | Collation
initvalue | Initial value for this variable
literal | Literal of a type compatible with the type of the local variable
context_var | Any context variable whose type is compatible with the type of the local variable
The statement DECLARE [VARIABLE]
is used for declaring a local variable.
One DECLARE [VARIABLE]
statement is required for each local variable.
Any number of DECLARE [VARIABLE]
statements can be included and in any order.
The name of a local variable must be unique among the names of local variables and input and output parameters declared for the module.
A special case of DECLARE [VARIABLE]
— declaring cursors — is covered separately in DECLARE .. CURSOR
Data Type for Variables
A local variable can be of any SQL type other than an array.
-
A domain name can be specified as the type; the variable will inherit all of its attributes.
-
If the
TYPE OF domain
clause is used instead, the variable will inherit only the domain’s data type, and, if applicable, its character set and collation attributes. Any default value or constraints such asNOT NULL
orCHECK
constraints are not inherited. -
If the
TYPE OF COLUMN relation.column
option is used to “borrow” from a column in a table or view, the variable will inherit only the column’s data type, and, if applicable, its character set and collation attributes. Any other attributes are ignored.
NOT NULL
Constraint
For local variables, you can specify the NOT NULL
constraint, disallowing NULL
values for the variable.
If a domain has been specified as the data type and the domain already has the NOT NULL
constraint, the declaration is unnecessary.
For other forms, including use of a domain that is nullable, the NOT NULL
constraint can be included if needed.
CHARACTER SET
and COLLATE
clauses
Unless specified, the character set and collation of a string variable will be the database defaults.
A CHARACTER SET
clause can be specified to handle string data that needs a different character set.
A valid collation (COLLATE
clause) can also be included, with or without the character set clause.
Initializing a Variable
Local variables are NULL
when execution of the module begins.
They can be explicitly initialized so that a starting or default value is available when they are first referenced.
The initial value can be specified in two ways, DEFAULT <initvalue>
and = <initvalue>
.
The value can be any type-compatible literal or context variable, including NULL
.
Examples of various ways to declare local variables
CREATE OR ALTER PROCEDURE SOME_PROC
AS
-- Declaring a variable of the INT type
DECLARE I INT;
-- Declaring a variable of the INT type that does not allow NULL
DECLARE VARIABLE J INT NOT NULL;
-- Declaring a variable of the INT type with the default value of 0
DECLARE VARIABLE K INT DEFAULT 0;
-- Declaring a variable of the INT type with the default value of 1
DECLARE VARIABLE L INT = 1;
-- Declaring a variable based on the COUNTRYNAME domain
DECLARE FARM_COUNTRY COUNTRYNAME;
-- Declaring a variable of the type equal to the COUNTRYNAME domain
DECLARE FROM_COUNTRY TYPE OF COUNTRYNAME;
-- Declaring a variable with the type of the CAPITAL column in the COUNTRY table
DECLARE CAPITAL TYPE OF COLUMN COUNTRY.CAPITAL;
BEGIN
/* PSQL statements */
END
7.7.4. DECLARE .. CURSOR
Declares a named cursor
DECLARE [VARIABLE] cursor_name [[NO] SCROLL] CURSOR FOR (<select>);
Argument | Description
---|---
cursor_name | Cursor name
select | SELECT statement
The DECLARE … CURSOR … FOR
statement binds a named cursor to the result set obtained by the SELECT
statement specified in the FOR
clause.
In the body code, the cursor can be opened, used to iterate row-by-row through the result set, and closed.
While the cursor is open, the code can perform positioned updates and deletes using the WHERE CURRENT OF
in the UPDATE
or DELETE
statement.
Syntactically, the DECLARE … CURSOR
statement is a special case of DECLARE VARIABLE
.
Forward-Only and Scrollable Cursors
The cursor can be forward-only (unidirectional) or scrollable.
The optional clause SCROLL
makes the cursor scrollable, the NO SCROLL
clause, forward-only.
By default, cursors are forward-only.
Forward-only cursors can — as the name implies — only move forward in the dataset.
Forward-only cursors only support the FETCH [NEXT FROM]
statement, other fetch options raise an error.
Scrollable cursors allow you to move not only forward in the dataset, but also back, as well as N positions relative to the current position.
Scrollable cursors are materialized as a temporary dataset; as such, they consume additional memory or disk space, so use them only when you really need them.
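With a scrollable cursor the full range of FETCH options becomes available. As a sketch, assuming a cursor C declared with SCROLL over a two-column result set, and variables N and RNAME declared to receive the columns:

```sql
OPEN C;
-- jump straight to the last row of the result set
FETCH LAST FROM C INTO :N, :RNAME;
-- move one row back from the current position
FETCH PRIOR FROM C INTO :N, :RNAME;
-- jump to an absolute position in the result set
FETCH ABSOLUTE 5 FROM C INTO :N, :RNAME;
CLOSE C;
```

A forward-only cursor, by contrast, supports only `FETCH [NEXT FROM]`.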
Cursor Idiosyncrasies
- The optional FOR UPDATE clause can be included in the SELECT statement, but its absence does not prevent successful execution of a positioned update or delete
- Care should be taken to ensure that the names of declared cursors do not conflict with any names used subsequently in statements for AS CURSOR clauses
- If the cursor is needed only to walk the result set, it is nearly always easier and less error-prone to use a FOR SELECT statement with the AS CURSOR clause. Declared cursors must be explicitly opened, used to fetch data, and closed. The context variable ROW_COUNT has to be checked after each fetch and, if its value is zero, the loop has to be terminated. A FOR SELECT statement does this automatically. Nevertheless, declared cursors provide a high level of control over sequential events and allow several cursors to be managed in parallel.
- The SELECT statement may contain parameters. For instance:

  SELECT NAME || :SFX FROM NAMES WHERE NUMBER = :NUM

  Each parameter has to have been declared beforehand as a PSQL variable, or as an input or output parameter. When the cursor is opened, the parameter is assigned the current value of the variable.
Unstable Variables and Cursors
If the value of a PSQL variable used in the SELECT statement changes during execution of the loop, its new value may (but not always) be used when selecting the remaining rows. Note particularly that the behaviour may depend on the query plan, specifically on the indexes being used. Currently, there are no strict rules for this behaviour, and this may change in future versions of Firebird.
Examples Using Named Cursors
-
Declaring a named cursor in a trigger.
CREATE OR ALTER TRIGGER TBU_STOCK
BEFORE UPDATE ON STOCK
AS
  DECLARE C_COUNTRY CURSOR FOR (
    SELECT COUNTRY, CAPITAL FROM COUNTRY
  );
BEGIN
  /* PSQL statements */
END
-
Declaring a scrollable cursor
EXECUTE BLOCK
RETURNS (
  N INT,
  RNAME CHAR(63))
AS
  -- Declaring a scrollable cursor
  DECLARE C SCROLL CURSOR FOR (
    SELECT
      ROW_NUMBER() OVER (ORDER BY RDB$RELATION_NAME) AS N,
      RDB$RELATION_NAME
    FROM RDB$RELATIONS
    ORDER BY RDB$RELATION_NAME);
BEGIN
  /* PSQL statements */
END
-
A collection of scripts for creating views with a PSQL block using named cursors.
EXECUTE BLOCK
RETURNS (
  SCRIPT BLOB SUB_TYPE TEXT)
AS
  DECLARE VARIABLE FIELDS VARCHAR(8191);
  DECLARE VARIABLE FIELD_NAME TYPE OF RDB$FIELD_NAME;
  DECLARE VARIABLE RELATION RDB$RELATION_NAME;
  DECLARE VARIABLE SOURCE TYPE OF COLUMN RDB$RELATIONS.RDB$VIEW_SOURCE;
  DECLARE VARIABLE CUR_R CURSOR FOR (
    SELECT
      RDB$RELATION_NAME,
      RDB$VIEW_SOURCE
    FROM RDB$RELATIONS
    WHERE RDB$VIEW_SOURCE IS NOT NULL);
  -- Declaring a named cursor where
  -- a local variable is used
  DECLARE CUR_F CURSOR FOR (
    SELECT RDB$FIELD_NAME
    FROM RDB$RELATION_FIELDS
    WHERE
      -- the variable must be declared earlier
      RDB$RELATION_NAME = :RELATION);
BEGIN
  OPEN CUR_R;
  WHILE (1 = 1) DO
  BEGIN
    FETCH CUR_R INTO :RELATION, :SOURCE;
    IF (ROW_COUNT = 0) THEN
      LEAVE;
    FIELDS = NULL;
    -- The CUR_F cursor will use the value
    -- of the RELATION variable initiated above
    OPEN CUR_F;
    WHILE (1 = 1) DO
    BEGIN
      FETCH CUR_F INTO :FIELD_NAME;
      IF (ROW_COUNT = 0) THEN
        LEAVE;
      IF (FIELDS IS NULL) THEN
        FIELDS = TRIM(FIELD_NAME);
      ELSE
        FIELDS = FIELDS || ', ' || TRIM(FIELD_NAME);
    END
    CLOSE CUR_F;
    SCRIPT = 'CREATE VIEW ' || RELATION;
    IF (FIELDS IS NOT NULL) THEN
      SCRIPT = SCRIPT || ' (' || FIELDS || ')';
    SCRIPT = SCRIPT || ' AS ' || ASCII_CHAR(13);
    SCRIPT = SCRIPT || SOURCE;
    SUSPEND;
  END
  CLOSE CUR_R;
END
7.7.5. DECLARE FUNCTION
Declares a sub-function
<subfunc-forward> ::= <subfunc-header>;
<subfunc-def> ::= <subfunc-header> <psql-module-body>
<subfunc-header> ::=
DECLARE FUNCTION subfuncname [ ( [ <in_params> ] ) ]
RETURNS <domain_or_non_array_type> [COLLATE collation]
[DETERMINISTIC]
<in_params> ::=
!! See CREATE FUNCTION
Syntax !!
<domain_or_non_array_type> ::=
!! See Scalar Data Types Syntax !!
<psql-module-body> ::=
!! See Syntax of Module Body !!
Argument | Description
---|---
subfuncname | Sub-function name
collation | Collation name
The DECLARE FUNCTION
statement declares a sub-function.
A sub-function is only visible to the PSQL module that defined the sub-function.
A sub-function can use variables, but not cursors, from its parent module. It can access other routines from its parent modules, including recursive calls to itself.
Sub-functions have a number of restrictions:
- A sub-function cannot be nested in another subroutine. Subroutines are only supported in top-level PSQL modules (stored procedures, stored functions, triggers and PSQL blocks). This restriction is not enforced by the syntax, but attempts to create nested sub-functions will raise the error “feature is not supported” with detail message “nested sub function”.
- Currently, a sub-function has no direct access to cursors from its parent module.
A sub-function can be forward declared to resolve mutual dependencies between subroutines, and must be followed by its actual definition. When a sub-function is forward declared and has parameters with default values, the default values should only be specified in the forward declaration, and should not be repeated in subfunc_def.
Declaring a sub-function with the same name as a stored function will hide that stored function from your module. It will not be possible to call that stored function. Contrary to DECLARE [VARIABLE], a DECLARE FUNCTION is not terminated by a semicolon; the END of its main BEGIN … END block is considered its terminator.
Examples of Sub-Functions
-
Sub-function within a stored function
CREATE OR ALTER FUNCTION FUNC1 (n1 INTEGER, n2 INTEGER)
RETURNS INTEGER
AS
  -- Sub-function
  DECLARE FUNCTION SUBFUNC (n1 INTEGER, n2 INTEGER)
  RETURNS INTEGER
  AS
  BEGIN
    RETURN n1 + n2;
  END
BEGIN
  RETURN SUBFUNC (n1, n2);
END
-
Recursive function call
execute block returns (i integer, o integer)
as
  -- Recursive function without forward declaration.
  declare function fibonacci(n integer) returns integer
  as
  begin
    if (n = 0 or n = 1) then
      return n;
    else
      return fibonacci(n - 1) + fibonacci(n - 2);
  end
begin
  i = 0;
  while (i < 10) do
  begin
    o = fibonacci(i);
    suspend;
    i = i + 1;
  end
end
7.7.6. DECLARE PROCEDURE
Declares a sub-procedure
<subproc-forward> ::= <subproc-header>;
<subproc-def> ::= <subproc-header> <psql-module-body>
<subproc-header> ::=
DECLARE PROCEDURE subprocname [ ( [ <in_params> ] ) ]
[RETURNS (<out_params>)]
<in_params> ::=
!! See CREATE PROCEDURE
Syntax !!
<domain_or_non_array_type> ::=
!! See Scalar Data Types Syntax !!
<psql-module-body> ::=
!! See Syntax of Module Body !!
Argument | Description
---|---
subprocname | Sub-procedure name
collation | Collation name
The DECLARE PROCEDURE
statement declares a sub-procedure.
A sub-procedure is only visible to the PSQL module that defined the sub-procedure.
A sub-procedure can use variables, but not cursors, from its parent module. It can access other routines from its parent modules.
Sub-procedures have a number of restrictions:
- A sub-procedure cannot be nested in another subroutine. Subroutines are only supported in top-level PSQL modules (stored procedures, stored functions, triggers and PSQL blocks). This restriction is not enforced by the syntax, but attempts to create nested sub-procedures will raise the error “feature is not supported” with detail message “nested sub procedure”.
- Currently, a sub-procedure has no direct access to cursors from its parent module.
A sub-procedure can be forward declared to resolve mutual dependencies between subroutines, and must be followed by its actual definition. When a sub-procedure is forward declared and has parameters with default values, the default values should only be specified in the forward declaration, and should not be repeated in subproc_def.
Declaring a sub-procedure with the same name as a stored procedure, table or view will hide that stored procedure, table or view from your module. It will not be possible to call that stored procedure, table or view. Contrary to DECLARE [VARIABLE], a DECLARE PROCEDURE is not terminated by a semicolon; the END of its main BEGIN … END block is considered its terminator.
Examples of Sub-Procedures
-
Subroutines in
EXECUTE BLOCK
EXECUTE BLOCK
RETURNS (name VARCHAR(63))
AS
  -- Sub-procedure returning a list of tables
  DECLARE PROCEDURE get_tables
  RETURNS (table_name VARCHAR(63))
  AS
  BEGIN
    FOR SELECT RDB$RELATION_NAME
      FROM RDB$RELATIONS
      WHERE RDB$VIEW_BLR IS NULL
      INTO table_name
    DO SUSPEND;
  END
  -- Sub-procedure returning a list of views
  DECLARE PROCEDURE get_views
  RETURNS (view_name VARCHAR(63))
  AS
  BEGIN
    FOR SELECT RDB$RELATION_NAME
      FROM RDB$RELATIONS
      WHERE RDB$VIEW_BLR IS NOT NULL
      INTO view_name
    DO SUSPEND;
  END
BEGIN
  FOR SELECT table_name
    FROM get_tables
    UNION ALL
    SELECT view_name
    FROM get_views
    INTO name
  DO SUSPEND;
END
-
With forward declaration and parameter with default value
execute block returns (o integer)
as
  -- Forward declaration of P1.
  declare procedure p1(i integer = 1) returns (o integer);
  -- Forward declaration of P2.
  declare procedure p2(i integer) returns (o integer);
  -- Implementation of P1 should not re-declare the parameter default value.
  declare procedure p1(i integer) returns (o integer)
  as
  begin
    execute procedure p2(i) returning_values o;
  end
  declare procedure p2(i integer) returns (o integer)
  as
  begin
    o = i;
  end
begin
  execute procedure p1 returning_values o;
  suspend;
end
7.7.7. BEGIN … END
Delimits a block of statements
<block> ::=
  BEGIN
    [<compound_statement> ...]
    [<when_do> ...]
  END

<compound_statement> ::= {<block> | <statement>}

<when_do> ::= !! See WHEN ... DO !!
The BEGIN … END
construct is a two-part statement that wraps a block of statements that are executed as one unit of code.
Each block starts with the keyword BEGIN
and ends with the keyword END
.
Blocks can be nested to a maximum depth of 512 levels.
A block can be empty, allowing it to act as a stub, without the need to write dummy statements.
For error handling, you can add one or more WHEN … DO
statements immediately before END
.
Other statements are not allowed after WHEN … DO
.
The BEGIN … END
itself should not be followed by a statement terminator (semicolon).
However, when defining or altering a PSQL module in the isql utility, that application requires the last END statement to be followed by its own terminator character, which was previously switched, using SET TERM, to a string other than a semicolon.
That terminator is not part of the PSQL syntax.
The final, or outermost, END
statement in a trigger terminates the trigger.
What the final END
statement does in a stored procedure depends on the type of procedure:
-
In a selectable procedure, the final
END
statement returns control to the caller, returning SQLCODE 100, indicating that there are no more rows to retrieve -
In an executable procedure, the final
END
statement returns control to the caller, along with the current values of any output parameters defined.
BEGIN … END
Examples
A sample procedure from the employee.fdb database, showing simple usage of BEGIN … END blocks:
blocks:SET TERM ^;
CREATE OR ALTER PROCEDURE DEPT_BUDGET (
DNO CHAR(3))
RETURNS (
TOT DECIMAL(12,2))
AS
DECLARE VARIABLE SUMB DECIMAL(12,2);
DECLARE VARIABLE RDNO CHAR(3);
DECLARE VARIABLE CNT INTEGER;
BEGIN
TOT = 0;
SELECT BUDGET
FROM DEPARTMENT
WHERE DEPT_NO = :DNO
INTO :TOT;
SELECT COUNT(BUDGET)
FROM DEPARTMENT
WHERE HEAD_DEPT = :DNO
INTO :CNT;
IF (CNT = 0) THEN
SUSPEND;
FOR SELECT DEPT_NO
FROM DEPARTMENT
WHERE HEAD_DEPT = :DNO
INTO :RDNO
DO
BEGIN
EXECUTE PROCEDURE DEPT_BUDGET(:RDNO)
RETURNING_VALUES :SUMB;
TOT = TOT + SUMB;
END
SUSPEND;
END^
SET TERM ;^
7.7.8. IF … THEN … ELSE
Conditional branching
IF (<condition>) THEN <compound_statement> [ELSE <compound_statement>]
Argument | Description
---|---
condition | A logical condition returning TRUE, FALSE or UNKNOWN
compound_statement | A single statement, or statements wrapped in BEGIN … END
The conditional branch statement IF … THEN
is used to branch the execution process in a PSQL module.
The condition is always enclosed in parentheses.
If the condition returns the value TRUE
, execution branches to the statement or the block of statements after the keyword THEN
.
If an ELSE
is present, and the condition returns FALSE
or UNKNOWN
, execution branches to the statement or the block of statements after it.
IF
Examples
-
An example using the
IF
statement. Assume that the variablesFIRST
,LINE2
andLAST
were declared earlier.... IF (FIRST IS NOT NULL) THEN LINE2 = FIRST || ' ' || LAST; ELSE LINE2 = LAST; ...
-
Given
IF … THEN … ELSE
is a statement, it is possible to chain them together. Assume that theINT_VALUE
andSTRING_VALUE
variables were declared earlier.IF (INT_VALUE = 1) THEN STRING_VALUE = 'one'; ELSE IF (INT_VALUE = 2) THEN STRING_VALUE = 'two'; ELSE IF (INT_VALUE = 3) THEN STRING_VALUE = 'three'; ELSE STRING_VALUE = 'too much';
  This specific example can be replaced with a simple CASE or the DECODE function.
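For instance, the chain above could be collapsed into a single assignment with the built-in DECODE function, where the final argument supplies the value when no pair matches:

```sql
STRING_VALUE = DECODE(INT_VALUE,
    1, 'one',
    2, 'two',
    3, 'three',
    'too much');
```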
7.7.9. WHILE … DO
Looping construct
[label:] WHILE (<condition>) DO <compound_statement>
Argument | Description
---|---
label | Optional label for WHILE … DO
condition | A logical condition returning TRUE, FALSE or UNKNOWN
compound_statement | A single statement, or statements wrapped in BEGIN … END
A WHILE
statement implements the looping construct in PSQL.
The statement or the block of statements will be executed as long as the condition returns TRUE
.
Loops can be nested to any depth.
WHILE … DO
Examples
A procedure calculating the sum of numbers from 1 to I
shows how the looping construct is used.
CREATE PROCEDURE SUM_INT (I INTEGER)
RETURNS (S INTEGER)
AS
BEGIN
s = 0;
WHILE (i > 0) DO
BEGIN
s = s + i;
i = i - 1;
END
END
Executing the procedure in isql:
EXECUTE PROCEDURE SUM_INT(4);
the result is:
S
==========
10
See also: IF … THEN … ELSE, BREAK, LEAVE, CONTINUE, EXIT, FOR SELECT, FOR EXECUTE STATEMENT
7.7.10. BREAK
Exits a loop
[label:]
<loop_stmt>
BEGIN
  ...
  BREAK;
  ...
END

<loop_stmt> ::=
    FOR <select_stmt> INTO <var_list> DO
  | FOR EXECUTE STATEMENT ... INTO <var_list> DO
  | WHILE (<condition>) DO
Argument | Description
---|---
label | Label of the loop statement
select_stmt | SELECT statement
condition | A logical condition returning TRUE, FALSE or UNKNOWN
The BREAK
statement immediately terminates the inner loop of a WHILE
or FOR
looping statement.
Code continues to be executed from the first statement after the terminated loop block.
BREAK
is similar to LEAVE
, except it doesn’t support a label.
7.7.11. LEAVE
Exits a loop
[label:]
<loop_stmt>
BEGIN
  ...
  LEAVE [label];
  ...
END

<loop_stmt> ::=
    FOR <select_stmt> INTO <var_list> DO
  | FOR EXECUTE STATEMENT ... INTO <var_list> DO
  | WHILE (<condition>) DO
Argument | Description
---|---
label | Label of the loop statement
select_stmt | SELECT statement
condition | A logical condition returning TRUE, FALSE or UNKNOWN
The LEAVE
statement immediately terminates the inner loop of a WHILE
or FOR
looping statement.
Using the optional label parameter, LEAVE
can also exit an outer loop, that is, the loop labelled with label.
Code continues to be executed from the first statement after the terminated loop block.
LEAVE
Examples
-
Leaving a loop if an error occurs on an insert into the
NUMBERS
table. The code continues to be executed from the lineC = 0
.... WHILE (B < 10) DO BEGIN INSERT INTO NUMBERS(B) VALUES (:B); B = B + 1; WHEN ANY DO BEGIN EXECUTE PROCEDURE LOG_ERROR ( CURRENT_TIMESTAMP, 'ERROR IN B LOOP'); LEAVE; END END C = 0; ...
-
An example using labels in the
LEAVE
statement.LEAVE LOOPA
terminates the outer loop andLEAVE LOOPB
terminates the inner loop. Note that the plainLEAVE
statement would be enough to terminate the inner loop.... STMT1 = 'SELECT NAME FROM FARMS'; LOOPA: FOR EXECUTE STATEMENT :STMT1 INTO :FARM DO BEGIN STMT2 = 'SELECT NAME ' || 'FROM ANIMALS WHERE FARM = '''; LOOPB: FOR EXECUTE STATEMENT :STMT2 || :FARM || '''' INTO :ANIMAL DO BEGIN IF (ANIMAL = 'FLUFFY') THEN LEAVE LOOPB; ELSE IF (ANIMAL = FARM) THEN LEAVE LOOPA; SUSPEND; END END ...
7.7.12. CONTINUE
Continues with the next iteration of a loop
[label:]
<loop_stmt>
BEGIN
  ...
  CONTINUE [label];
  ...
END

<loop_stmt> ::=
    FOR <select_stmt> INTO <var_list> DO
  | FOR EXECUTE STATEMENT ... INTO <var_list> DO
  | WHILE (<condition>) DO
Argument | Description |
---|---|
label |
Label |
select_stmt |
|
condition |
A logical condition returning TRUE, FALSE or UNKNOWN |
The CONTINUE
statement skips the remainder of the current block of a loop and starts the next iteration of the current WHILE
or FOR
loop.
Using the optional label parameter, CONTINUE
can also start the next iteration of an outer loop, that is, the loop labelled with label.
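As an illustrative sketch (not from the original reference), a loop that sums only the even numbers from 1 to 10; the variables I and TOTAL are assumed to be declared as INTEGER:

```sql
-- Hypothetical PSQL fragment: skip odd values with CONTINUE
I = 0;
TOTAL = 0;
WHILE (I < 10) DO
BEGIN
  I = I + 1;
  IF (MOD(I, 2) = 1) THEN
    CONTINUE;  -- skip the rest of the block; resume with the next iteration
  TOTAL = TOTAL + I;
END
```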
7.7.13. EXIT
Terminates execution of a module
EXIT;
The EXIT
statement causes execution of the current PSQL module to jump to the final END
statement from any point in the code, thus terminating the program.
Calling EXIT
in a function will result in the function returning NULL
.
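A minimal sketch of EXIT in a stored procedure; the table and column names are illustrative only:

```sql
CREATE PROCEDURE SHIP_IF_READY (PO_NUM CHAR(8))
AS
  DECLARE VARIABLE ord_stat CHAR(7);
BEGIN
  SELECT order_status
  FROM sales
  WHERE po_number = :PO_NUM
  INTO :ord_stat;

  -- Nothing left to do: jump to the final END and terminate
  IF (ord_stat = 'shipped') THEN
    EXIT;

  /* further processing */
END
```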
7.7.14. SUSPEND
Passes output to the buffer and suspends execution while waiting for the caller to fetch it
SUSPEND;
The SUSPEND
statement is used in selectable stored procedures to pass the values of output parameters to a buffer and suspend execution.
Execution remains suspended until the calling application fetches the contents of the buffer.
Execution resumes from the statement directly after the SUSPEND
statement.
In practice, this is likely to be a new iteration of a looping process.
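A minimal selectable procedure sketching this loop-and-suspend pattern (illustrative, not from the original reference):

```sql
CREATE PROCEDURE COUNT_TO (MAXVAL INTEGER)
RETURNS (N INTEGER)
AS
BEGIN
  N = 1;
  WHILE (N <= MAXVAL) DO
  BEGIN
    SUSPEND;   -- pass N to the buffer; wait until the caller fetches it
    N = N + 1; -- execution resumes here after the fetch
  END
END
```

It could then be queried with SELECT N FROM COUNT_TO(5).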
7.7.15. EXECUTE STATEMENT
Executes dynamically created SQL statements
<execute_statement> ::= EXECUTE STATEMENT <argument>
[<option> ...]
[INTO <variables>];
<argument> ::= <paramless_stmt>
| (<paramless_stmt>)
| (<stmt_with_params>) (<param_values>)
<param_values> ::= <named_values> | <positional_values>
<named_values> ::= <named_value> [, <named_value> ...]
<named_value> ::= [EXCESS] paramname := <value_expr>
<positional_values> ::= <value_expr> [, <value_expr> ...]
<option> ::=
WITH {AUTONOMOUS | COMMON} TRANSACTION
| WITH CALLER PRIVILEGES
| AS USER user
| PASSWORD password
| ROLE role
| ON EXTERNAL [DATA SOURCE] <connection_string>
<connection_string> ::=
!! See <filespec> in the CREATE DATABASE
syntax !!
<variables> ::= [:]varname [, [:]varname ...]
Argument | Description |
---|---|
paramless_stmt |
Literal string or variable containing a non-parameterized SQL query |
stmt_with_params |
Literal string or variable containing a parameterized SQL query |
paramname |
SQL query parameter name |
value_expr |
SQL expression resolving to a value |
user |
Username.
It can be a string, CURRENT_USER or a string variable |
password |
Password. It can be a string or a string variable |
role |
Role.
It can be a string, CURRENT_ROLE or a string variable |
connection_string |
Connection string. It can be a string literal or a string variable |
varname |
Variable |
The statement EXECUTE STATEMENT
takes a string parameter and executes it as if it were a DSQL statement.
If the statement returns data, it can be passed to local variables by way of an INTO
clause.
EXECUTE STATEMENT
can only produce a single row of data.
Statements producing multiple rows of data must be executed with FOR EXECUTE STATEMENT
.
Parameterized Statements
You can use parameters — either named or positional — in the DSQL statement string. Each parameter must be assigned a value.
Special Rules for Parameterized Statements
-
Named and positional parameters cannot be mixed in one query
-
Each parameter must be used in the statement text.
To relax this rule, named parameters can be prefixed with the keyword
EXCESS
to indicate that the parameter may be absent from the statement text. This option is useful for dynamically generated statements that conditionally include or exclude certain parameters. -
If the statement has parameters, they must be enclosed in parentheses when
EXECUTE STATEMENT
is called, regardless of whether they come directly as strings, as variable names or as expressions -
Each named parameter must be prefixed by a colon (‘
:
’) in the statement string itself, but not when the parameter is assigned a value -
Positional parameters must be assigned their values in the same order as they appear in the query text
-
The assignment operator for parameters is the special operator “
:=
”, similar to the assignment operator in Pascal -
Each named parameter can be used in the statement more than once, but its value must be assigned only once
-
With positional parameters, the number of assigned values must match the number of parameter placeholders (question marks) in the statement exactly
-
A named parameter in the statement text can only be a regular identifier (it cannot be a quoted identifier)
Examples of EXECUTE STATEMENT
with parameters
-
With named parameters:
...
DECLARE license_num VARCHAR(15);
DECLARE connect_string VARCHAR(100);
DECLARE stmt VARCHAR(100) =
  'SELECT license '
  || 'FROM cars '
  || 'WHERE driver = :driver AND location = :loc';
BEGIN
  -- ...
  EXECUTE STATEMENT (stmt)
    (driver := current_driver,
     loc := current_location)
  ON EXTERNAL connect_string
  INTO license_num;
-
The same code with positional parameters:
DECLARE license_num VARCHAR(15);
DECLARE connect_string VARCHAR(100);
DECLARE stmt VARCHAR(100) =
  'SELECT license '
  || 'FROM cars '
  || 'WHERE driver = ? AND location = ?';
BEGIN
  -- ...
  EXECUTE STATEMENT (stmt)
    (current_driver, current_location)
  ON EXTERNAL connect_string
  INTO license_num;
-
Use of
EXCESS
to allow named parameters to be unused (note: this is a FOR EXECUTE STATEMENT
):
CREATE PROCEDURE P_EXCESS (A_ID INT, A_TRAN INT = NULL, A_CONN INT = NULL)
RETURNS (ID INT, TRAN INT, CONN INT)
AS
DECLARE S VARCHAR(255) = 'SELECT * FROM TTT WHERE ID = :ID';
DECLARE W VARCHAR(255) = '';
BEGIN
IF (A_TRAN IS NOT NULL)
THEN W = W || ' AND TRAN = :a';
IF (A_CONN IS NOT NULL)
THEN W = W || ' AND CONN = :b';
IF (W <> '')
THEN S = S || W;
-- could raise error if TRAN or CONN is null
-- FOR EXECUTE STATEMENT (:S) (a := :A_TRAN, b := A_CONN, id := A_ID)
-- OK in all cases
FOR EXECUTE STATEMENT (:S) (EXCESS a := :A_TRAN, EXCESS b := A_CONN, id := A_ID)
INTO :ID, :TRAN, :CONN
DO SUSPEND;
END
WITH {AUTONOMOUS | COMMON} TRANSACTION
By default, the executed SQL statement runs within the current transaction.
Using WITH AUTONOMOUS TRANSACTION
causes a separate transaction to be started, with the same parameters as the current transaction.
This separate transaction is committed if the statement executes without errors, and rolled back otherwise.
The clause WITH COMMON TRANSACTION
uses the current transaction whenever possible;
this is the default behaviour.
If the statement must run in a separate connection, an already started transaction within that connection is used, if available.
Otherwise, a new transaction is started with the same parameters as the current transaction.
Any new transactions started under the “COMMON
” regime are committed or rolled back with the current transaction.
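As a sketch of the difference, an audit insert that must survive a rollback of the surrounding transaction could be executed like this (the LOG table is hypothetical):

```sql
-- The INSERT runs in its own transaction and is committed immediately
-- after successful execution, even if the caller's transaction is
-- later rolled back
EXECUTE STATEMENT
  ('INSERT INTO LOG (MSG) VALUES (?)') ('batch started')
  WITH AUTONOMOUS TRANSACTION;
```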
WITH CALLER PRIVILEGES
By default, the SQL statement is executed with the privileges of the current user.
Specifying WITH CALLER PRIVILEGES
combines the privileges of the calling procedure or trigger with those of the user, as if the statement were executed directly by the routine.
WITH CALLER PRIVILEGES
has no effect if the ON EXTERNAL
clause is also present.
ON EXTERNAL [DATA SOURCE]
With ON EXTERNAL [DATA SOURCE]
, the SQL statement is executed in a separate connection to the same or another database, possibly even on another server.
If connection_string is NULL or “''
” (empty string), the entire ON EXTERNAL [DATA SOURCE]
clause is considered absent, and the statement is executed against the current database.
Connection Pooling
-
External connections made by statements
WITH COMMON TRANSACTION
(the default) will remain open until the current transaction ends. They can be reused by subsequent calls to EXECUTE STATEMENT
, but only if connection_string is identical, including case -
External connections made by statements
WITH AUTONOMOUS TRANSACTION
are closed as soon as the statement has been executed -
Statements using
WITH AUTONOMOUS TRANSACTION
can and will re-use connections that were opened earlier by statements WITH COMMON TRANSACTION
. If this happens, the reused connection will be left open after the statement has been executed. (It must be, because it has at least one active transaction!)
Transaction Pooling
-
If
WITH COMMON TRANSACTION
is in effect, transactions will be reused as much as possible. They will be committed or rolled back together with the current transaction -
If
WITH AUTONOMOUS TRANSACTION
is specified, a fresh transaction will always be started for the statement. This transaction will be committed or rolled back immediately after the statement’s execution
Exception Handling
When ON EXTERNAL
is used, the extra connection is always made via a so-called external provider, even if the connection is to the current database.
One of the consequences is that exceptions cannot be caught in the usual way.
Every exception caused by the statement is wrapped in either an eds_connection
or an eds_statement
error.
To catch them in your PSQL code, you have to use WHEN GDSCODE eds_connection
, WHEN GDSCODE eds_statement
or WHEN ANY
.
Without ON EXTERNAL
, exceptions are caught in the usual way, even if an extra connection is made to the current database.
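A sketch of trapping these wrapped errors; the LOG_ERROR procedure and the connection string are hypothetical:

```sql
BEGIN
  EXECUTE STATEMENT ('UPDATE t SET x = 1')
    ON EXTERNAL 'server2:/data/other.fdb';

  -- every error raised by the external statement arrives wrapped
  -- in eds_connection or eds_statement
  WHEN GDSCODE eds_connection, GDSCODE eds_statement DO
  BEGIN
    EXECUTE PROCEDURE LOG_ERROR(CURRENT_TIMESTAMP, 'EDS failure');
  END
END
```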
AS USER
, PASSWORD
and ROLE
The optional AS USER
, PASSWORD
and ROLE
clauses allow specification of which user will execute the SQL statement and with which role.
The method of user login, and whether a separate connection is opened, depends on the presence and values of the ON EXTERNAL [DATA SOURCE]
, AS USER
, PASSWORD
and ROLE
clauses:
-
If
ON EXTERNAL
is present, a new connection is always opened, and:-
If at least one of
AS USER
, PASSWORD
and ROLE
is present, native authentication is attempted with the given parameter values (locally or remotely, depending on connection_string). No defaults are used for missing parameters -
If all three are absent, and connection_string contains no hostname, then the new connection is established on the local server with the same user and role as the current connection. The term 'local' means “on the same machine as the server” here. This is not necessarily the location of the client
-
If all three are absent, and connection_string contains a hostname, then trusted authentication is attempted on the remote host (again, 'remote' from the perspective of the server). If this succeeds, the remote operating system will provide the username (usually the operating system account under which the Firebird process runs)
-
-
If
ON EXTERNAL
is absent:-
If at least one of
AS USER
, PASSWORD
and ROLE
is present, a new connection to the current database is opened with the supplied parameter values. No defaults are used for missing parameters -
If all three are absent, the statement is executed within the current connection
-
If a parameter value is NULL or “''
” (empty string), the entire parameter is considered absent.
Additionally, AS USER
is considered absent if its value is equal to CURRENT_USER
, and ROLE
if it is the same as CURRENT_ROLE
.
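For illustration, a statement executed in a new connection under explicit credentials (the connection string, user, password and role values are hypothetical):

```sql
-- Opens a new connection to the external database and executes
-- the DELETE with native authentication as user CLEANER
EXECUTE STATEMENT 'DELETE FROM audit_tmp'
  ON EXTERNAL 'server2:/data/audit.fdb'
  AS USER 'CLEANER' PASSWORD 'secret' ROLE 'SWEEPER';
```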
Caveats with EXECUTE STATEMENT
-
There is no way to validate the syntax of the enclosed statement
-
There are no dependency checks to discover whether tables or columns have been dropped
-
Execution is considerably slower than when the same statements are executed directly as PSQL code
-
Return values are strictly checked for data type to avoid unpredictable type-casting exceptions. For example, the string
'1234'
would convert to an integer, 1234, but 'abc'
would give a conversion error
All in all, this feature is meant to be used cautiously, and you should always take the caveats into account. If you can achieve the same result with PSQL and/or DSQL, it will almost always be preferable.
7.7.16. FOR SELECT
Loops row-by-row through a query result set
[label:] FOR <select_stmt> [AS CURSOR cursor_name] DO <compound_statement>
Argument | Description |
---|---|
label |
Optional label for LEAVE and CONTINUE statements |
select_stmt |
|
cursor_name |
Cursor name. It must be unique among cursor names in the PSQL module (stored procedure, stored function, trigger or PSQL block) |
compound_statement |
A single statement, or statements wrapped in BEGIN … END |
The FOR SELECT
statement
-
retrieves each row sequentially from the result set, and executes the statement or block of statements for each row. In each iteration of the loop, the field values of the current row are copied into pre-declared variables.
Including the
AS CURSOR
clause enables positioned deletes and updates to be performed — see notes below -
can embed other
FOR SELECT
statements -
can contain named parameters that must be previously declared in the
DECLARE VARIABLE
statement or exist as input or output parameters of the procedure -
requires an
INTO
clause at the end of the SELECT … FROM …
specification if AS CURSOR
is absent. In each iteration of the loop, the field values of the current row are copied to the list of variables specified in the INTO
clause. The loop repeats until all rows are retrieved, after which it terminates -
can be terminated before all rows are retrieved by using a
BREAK
, LEAVE
or EXIT
statement
The Undeclared Cursor
The optional AS CURSOR
clause surfaces the result set of the FOR SELECT
structure as an undeclared, named cursor that can be operated on using the WHERE CURRENT OF
clause inside the statement or block following the DO
command, to delete or update the current row before execution moves to the next row.
In addition, it is possible to use the cursor name as a record variable (similar to OLD
and NEW
in triggers), allowing access to the columns of the result set (i.e. cursor_name.columnname).
-
When accessing a cursor variable in a DML statement, the colon prefix can be added before the cursor name (i.e.
:cursor_name.columnname
) for disambiguation, similar to variables. The cursor variable can be referenced without the colon prefix, but in that case, depending on the scope of the contexts in the statement, the name may resolve to the statement context instead of to the cursor (e.g. if you select from a table with the same name as the cursor).
-
Cursor variables are read-only
-
In a
FOR SELECT
statement without an AS CURSOR
clause, you must use the INTO
clause. If an AS CURSOR
clause is specified, the INTO
clause is allowed, but optional; you can access the fields through the cursor instead. -
Reading from a cursor variable returns the current field values. This means that an
UPDATE
statement (with a WHERE CURRENT OF
clause) will update not only the table, but also the fields in the cursor variable for subsequent reads. Executing a DELETE
statement (with a WHERE CURRENT OF
clause) will set all fields in the cursor variable to NULL
for subsequent reads
Other points to take into account regarding undeclared cursors:
-
The
OPEN
, FETCH
and CLOSE
statements cannot be applied to a cursor surfaced by the AS CURSOR
clause -
The cursor_name argument associated with an
AS CURSOR
clause must not clash with any names created by DECLARE VARIABLE
or DECLARE CURSOR
statements at the top of the module body, nor with any other cursors surfaced by an AS CURSOR
clause -
The optional
FOR UPDATE
clause in the SELECT
statement is not required for a positioned update
Examples using FOR SELECT
-
A simple loop through query results:
CREATE PROCEDURE SHOWNUMS
RETURNS (
  AA INTEGER,
  BB INTEGER,
  SM INTEGER,
  DF INTEGER)
AS
BEGIN
  FOR SELECT DISTINCT A, B
      FROM NUMBERS
      ORDER BY A, B
      INTO AA, BB
  DO
  BEGIN
    SM = AA + BB;
    DF = AA - BB;
    SUSPEND;
  END
END
-
Nested
FOR SELECT
loop:

CREATE PROCEDURE RELFIELDS
RETURNS (
  RELATION CHAR(32),
  POS INTEGER,
  FIELD CHAR(32))
AS
BEGIN
  FOR SELECT RDB$RELATION_NAME
      FROM RDB$RELATIONS
      ORDER BY 1
      INTO :RELATION
  DO
  BEGIN
    FOR SELECT
          RDB$FIELD_POSITION + 1,
          RDB$FIELD_NAME
        FROM RDB$RELATION_FIELDS
        WHERE RDB$RELATION_NAME = :RELATION
        ORDER BY RDB$FIELD_POSITION
        INTO :POS, :FIELD
    DO
    BEGIN
      IF (POS = 2) THEN
        RELATION = ' "';
      SUSPEND;
    END
  END
END

Instead of nesting statements, this is generally better solved with a single statement using a join.
-
Using the
AS CURSOR
clause to surface a cursor for the positioned delete of a record:

CREATE PROCEDURE DELTOWN (
  TOWNTODELETE VARCHAR(24))
RETURNS (
  TOWN VARCHAR(24),
  POP INTEGER)
AS
BEGIN
  FOR SELECT TOWN, POP
      FROM TOWNS
      INTO :TOWN, :POP
      AS CURSOR TCUR
  DO
  BEGIN
    IF (:TOWN = :TOWNTODELETE) THEN
      -- Positional delete
      DELETE FROM TOWNS
      WHERE CURRENT OF TCUR;
    ELSE
      SUSPEND;
  END
END
-
Using an implicitly declared cursor as a cursor variable
EXECUTE BLOCK
RETURNS (o CHAR(63))
AS
BEGIN
  FOR SELECT rdb$relation_name AS name
      FROM rdb$relations
      AS CURSOR c
  DO
  BEGIN
    o = c.name;
    SUSPEND;
  END
END
-
Disambiguating cursor variables within queries
EXECUTE BLOCK
RETURNS (o1 CHAR(63), o2 CHAR(63))
AS
BEGIN
  FOR SELECT rdb$relation_name
      FROM rdb$relations
      WHERE rdb$relation_name = 'RDB$RELATIONS'
      AS CURSOR c
  DO
  BEGIN
    FOR SELECT
          -- with a prefix resolves to the cursor
          :c.rdb$relation_name x1,
          -- no prefix as an alias for the rdb$relations table
          c.rdb$relation_name x2
        FROM rdb$relations c
        WHERE rdb$relation_name = 'RDB$DATABASE'
        AS CURSOR d
    DO
    BEGIN
      o1 = d.x1;
      o2 = d.x2;
      SUSPEND;
    END
  END
END
7.7.17. FOR EXECUTE STATEMENT
Executes dynamically created SQL statements and loops over its result set
[label:] FOR <execute_statement> DO <compound_statement>
Argument | Description |
---|---|
label |
Optional label for LEAVE and CONTINUE statements |
execute_stmt |
An EXECUTE STATEMENT statement |
compound_statement |
A single statement, or statements wrapped in BEGIN … END |
The statement FOR EXECUTE STATEMENT
is used, in a manner analogous to FOR SELECT
, to loop through the result set of a dynamically executed query that returns multiple rows.
FOR EXECUTE STATEMENT
Examples
A dynamically constructed SELECT
query that returns a data set:

CREATE PROCEDURE DynamicSampleThree (
Q_FIELD_NAME VARCHAR(100),
Q_TABLE_NAME VARCHAR(100)
) RETURNS(
LINE VARCHAR(32000)
)
AS
DECLARE VARIABLE P_ONE_LINE VARCHAR(100);
BEGIN
LINE = '';
FOR
EXECUTE STATEMENT
'SELECT T1.' || :Q_FIELD_NAME ||
' FROM ' || :Q_TABLE_NAME || ' T1 '
INTO :P_ONE_LINE
DO
IF (:P_ONE_LINE IS NOT NULL) THEN
LINE = :LINE || :P_ONE_LINE || ' ';
SUSPEND;
END
7.7.18. OPEN
Opens a declared cursor
OPEN cursor_name;
Argument | Description |
---|---|
cursor_name |
Cursor name.
A cursor with this name must be previously declared with a DECLARE .. CURSOR statement |
An OPEN
statement opens a previously declared cursor, executes its declared SELECT
statement, and makes the first record of the result data set ready to fetch.
OPEN
can be applied only to cursors previously declared in a DECLARE .. CURSOR
statement.
If the SELECT
statement of the cursor has parameters, they must be declared as local variables, or input or output parameters before the cursor is declared.
When the cursor is opened, the parameter is assigned the current value of the variable.
OPEN
Examples
-
Using the
OPEN
statement:

SET TERM ^;

CREATE OR ALTER PROCEDURE GET_RELATIONS_NAMES
RETURNS (
  RNAME CHAR(63)
)
AS
  DECLARE C CURSOR FOR (
    SELECT RDB$RELATION_NAME
    FROM RDB$RELATIONS);
BEGIN
  OPEN C;
  WHILE (1 = 1) DO
  BEGIN
    FETCH C INTO :RNAME;
    IF (ROW_COUNT = 0) THEN
      LEAVE;
    SUSPEND;
  END
  CLOSE C;
END^

SET TERM ;^
-
A collection of scripts for creating views using a PSQL block with named cursors:
EXECUTE BLOCK
RETURNS (
  SCRIPT BLOB SUB_TYPE TEXT)
AS
  DECLARE VARIABLE FIELDS VARCHAR(8191);
  DECLARE VARIABLE FIELD_NAME TYPE OF RDB$FIELD_NAME;
  DECLARE VARIABLE RELATION RDB$RELATION_NAME;
  DECLARE VARIABLE SOURCE TYPE OF COLUMN RDB$RELATIONS.RDB$VIEW_SOURCE;
  -- named cursor
  DECLARE VARIABLE CUR_R CURSOR FOR (
    SELECT
      RDB$RELATION_NAME,
      RDB$VIEW_SOURCE
    FROM RDB$RELATIONS
    WHERE RDB$VIEW_SOURCE IS NOT NULL);
  -- named cursor with local variable
  DECLARE CUR_F CURSOR FOR (
    SELECT RDB$FIELD_NAME
    FROM RDB$RELATION_FIELDS
    WHERE
      -- Important! The variable has to be declared earlier
      RDB$RELATION_NAME = :RELATION);
BEGIN
  OPEN CUR_R;
  WHILE (1 = 1) DO
  BEGIN
    FETCH CUR_R INTO :RELATION, :SOURCE;
    IF (ROW_COUNT = 0) THEN
      LEAVE;
    FIELDS = NULL;
    -- The CUR_F cursor will use the
    -- value of RELATION initialized above
    OPEN CUR_F;
    WHILE (1 = 1) DO
    BEGIN
      FETCH CUR_F INTO :FIELD_NAME;
      IF (ROW_COUNT = 0) THEN
        LEAVE;
      IF (FIELDS IS NULL) THEN
        FIELDS = TRIM(FIELD_NAME);
      ELSE
        FIELDS = FIELDS || ', ' || TRIM(FIELD_NAME);
    END
    CLOSE CUR_F;
    SCRIPT = 'CREATE VIEW ' || RELATION;
    IF (FIELDS IS NOT NULL) THEN
      SCRIPT = SCRIPT || ' (' || FIELDS || ')';
    SCRIPT = SCRIPT || ' AS ' || ASCII_CHAR(13);
    SCRIPT = SCRIPT || SOURCE;
    SUSPEND;
  END
  CLOSE CUR_R;
END
7.7.19. FETCH
Fetches a record from a cursor
FETCH [<fetch_scroll> FROM] cursor_name
  [INTO [:]varname [, [:]varname ...]];

<fetch_scroll> ::=
    NEXT | PRIOR | FIRST | LAST
  | RELATIVE n | ABSOLUTE n
Argument | Description |
---|---|
cursor_name |
Cursor name.
A cursor with this name must be previously declared with a DECLARE .. CURSOR statement |
varname |
Variable name |
n |
Integer expression for the number of rows |
The FETCH
statement fetches the next row from the result set of the cursor and assigns the column values to PSQL variables.
The FETCH
statement can be used only with a cursor declared with the DECLARE .. CURSOR
statement.
Using the optional fetch_scroll part of the FETCH
statement, you can specify in which direction and how many rows to advance the cursor position.
The NEXT
fetch option can be used for scrollable and forward-only cursors.
Other fetch options are only supported for scrollable cursors.
NEXT
-
moves the cursor one row forward; this is the default
PRIOR
-
moves the cursor one record back
FIRST
-
moves the cursor to the first record.
LAST
-
moves the cursor to the last record
RELATIVE n
-
moves the cursor n rows from the current position; positive numbers move forward, negative numbers move backwards; using zero (
0
) will not move the cursor, andROW_COUNT
will be set to zero as no new row was fetched. ABSOLUTE n
-
moves the cursor to the specified row; n is an integer expression, where
1
indicates the first row. For negative values, the absolute position is taken from the end of the result set, so-1
indicates the last row,-2
the second to last row, etc. A value of zero (0
) will position before the first row.
The optional INTO
clause gets data from the current row of the cursor and loads them into PSQL variables.
If a fetch moves beyond the bounds of the result set, the variables will be set to NULL
.
It is also possible to use the cursor name as a variable of a record type (similar to OLD
and NEW
in triggers), allowing access to the columns of the result set (i.e. cursor_name.columnname).
-
When accessing a cursor variable in a DML statement, the colon prefix can be added before the cursor name (i.e.
:cursor_name.columnname
) for disambiguation, similar to variables. The cursor variable can be referenced without the colon prefix, but in that case, depending on the scope of the contexts in the statement, the name may resolve to the statement context instead of to the cursor (e.g. if you select from a table with the same name as the cursor).
-
Cursor variables are read-only
-
In a
FOR SELECT
statement without an AS CURSOR
clause, you must use the INTO
clause. If an AS CURSOR
clause is specified, the INTO
clause is allowed, but optional; you can access the fields through the cursor instead. -
Reading from a cursor variable returns the current field values. This means that an
UPDATE
statement (with a WHERE CURRENT OF
clause) will update not only the table, but also the fields in the cursor variable for subsequent reads. Executing a DELETE
statement (with a WHERE CURRENT OF
clause) will set all fields in the cursor variable to NULL
for subsequent reads -
When the cursor is not positioned on a row — it is positioned before the first row, or after the last row — attempts to read from the cursor variable will result in error “Cursor cursor_name is not positioned in a valid record”
To check whether all the rows of the result set have been fetched, examine the context variable ROW_COUNT, which returns the number of rows fetched by the statement.
If a record was fetched, then ROW_COUNT
is one (1
), otherwise zero (0
).
FETCH
Examples
-
Using the
FETCH
statement:

CREATE OR ALTER PROCEDURE GET_RELATIONS_NAMES
RETURNS (RNAME CHAR(63))
AS
  DECLARE C CURSOR FOR (
    SELECT RDB$RELATION_NAME
    FROM RDB$RELATIONS);
BEGIN
  OPEN C;
  WHILE (1 = 1) DO
  BEGIN
    FETCH C INTO RNAME;
    IF (ROW_COUNT = 0) THEN
      LEAVE;
    SUSPEND;
  END
  CLOSE C;
END
-
Using the
FETCH
statement with nested cursors:

EXECUTE BLOCK
RETURNS (SCRIPT BLOB SUB_TYPE TEXT)
AS
  DECLARE VARIABLE FIELDS VARCHAR(8191);
  DECLARE VARIABLE FIELD_NAME TYPE OF RDB$FIELD_NAME;
  DECLARE VARIABLE RELATION RDB$RELATION_NAME;
  DECLARE VARIABLE SRC TYPE OF COLUMN RDB$RELATIONS.RDB$VIEW_SOURCE;
  -- Named cursor declaration
  DECLARE VARIABLE CUR_R CURSOR FOR (
    SELECT
      RDB$RELATION_NAME,
      RDB$VIEW_SOURCE
    FROM RDB$RELATIONS
    WHERE RDB$VIEW_SOURCE IS NOT NULL);
  -- Declaring a named cursor in which
  -- a local variable is used
  DECLARE CUR_F CURSOR FOR (
    SELECT RDB$FIELD_NAME
    FROM RDB$RELATION_FIELDS
    WHERE
      -- the variable must be declared earlier
      RDB$RELATION_NAME = :RELATION);
BEGIN
  OPEN CUR_R;
  WHILE (1 = 1) DO
  BEGIN
    FETCH CUR_R INTO RELATION, SRC;
    IF (ROW_COUNT = 0) THEN
      LEAVE;
    FIELDS = NULL;
    -- Cursor CUR_F will use the value of
    -- the RELATION variable initialized above
    OPEN CUR_F;
    WHILE (1 = 1) DO
    BEGIN
      FETCH CUR_F INTO FIELD_NAME;
      IF (ROW_COUNT = 0) THEN
        LEAVE;
      IF (FIELDS IS NULL) THEN
        FIELDS = TRIM(FIELD_NAME);
      ELSE
        FIELDS = FIELDS || ', ' || TRIM(FIELD_NAME);
    END
    CLOSE CUR_F;
    SCRIPT = 'CREATE VIEW ' || RELATION;
    IF (FIELDS IS NOT NULL) THEN
      SCRIPT = SCRIPT || ' (' || FIELDS || ')';
    SCRIPT = SCRIPT || ' AS ' || ASCII_CHAR(13);
    SCRIPT = SCRIPT || SRC;
    SUSPEND;
  END
  CLOSE CUR_R;
END
-
An example of using the
FETCH
statement with a scrollable cursor
EXECUTE BLOCK
RETURNS (N INT, RNAME CHAR(63))
AS
  DECLARE C SCROLL CURSOR FOR (
    SELECT
      ROW_NUMBER() OVER (ORDER BY RDB$RELATION_NAME) AS N,
      RDB$RELATION_NAME
    FROM RDB$RELATIONS
    ORDER BY RDB$RELATION_NAME);
BEGIN
  OPEN C;
  -- move to the first record (N = 1)
  FETCH FIRST FROM C;
  RNAME = C.RDB$RELATION_NAME;
  N = C.N;
  SUSPEND;
  -- move 1 record forward (N = 2)
  FETCH NEXT FROM C;
  RNAME = C.RDB$RELATION_NAME;
  N = C.N;
  SUSPEND;
  -- move to the fifth record (N = 5)
  FETCH ABSOLUTE 5 FROM C;
  RNAME = C.RDB$RELATION_NAME;
  N = C.N;
  SUSPEND;
  -- move 1 record backward (N = 4)
  FETCH PRIOR FROM C;
  RNAME = C.RDB$RELATION_NAME;
  N = C.N;
  SUSPEND;
  -- move 3 records forward (N = 7)
  FETCH RELATIVE 3 FROM C;
  RNAME = C.RDB$RELATION_NAME;
  N = C.N;
  SUSPEND;
  -- move back 5 records (N = 2)
  FETCH RELATIVE -5 FROM C;
  RNAME = C.RDB$RELATION_NAME;
  N = C.N;
  SUSPEND;
  -- move to the first record (N = 1)
  FETCH FIRST FROM C;
  RNAME = C.RDB$RELATION_NAME;
  N = C.N;
  SUSPEND;
  -- move to the last record
  FETCH LAST FROM C;
  RNAME = C.RDB$RELATION_NAME;
  N = C.N;
  SUSPEND;
  CLOSE C;
END
7.7.20. CLOSE
Closes a declared cursor
CLOSE cursor_name;
Argument | Description |
---|---|
cursor_name |
Cursor name.
A cursor with this name must be previously declared with a DECLARE .. CURSOR statement |
A CLOSE
statement closes an open cursor.
Only a cursor that was declared with DECLARE .. CURSOR
can be closed with a CLOSE
statement.
Any cursors that are still open will be automatically closed after the module code completes execution.
7.7.21. IN AUTONOMOUS TRANSACTION
Executes a statement or a block of statements in an autonomous transaction
IN AUTONOMOUS TRANSACTION DO <compound_statement>
Argument | Description |
---|---|
compound_statement |
A single statement, or statements wrapped in BEGIN … END |
The IN AUTONOMOUS TRANSACTION
statement enables execution of a statement or a block of statements in an autonomous transaction.
Code running in an autonomous transaction will be committed right after its successful execution, regardless of the status of its parent transaction.
This can be used when certain operations must not be rolled back, even if an error occurs in the parent transaction.
An autonomous transaction has the same isolation level as its parent transaction. Any exception that is thrown in the block of the autonomous transaction code will result in the autonomous transaction being rolled back and all changes made will be undone. If the code executes successfully, the autonomous transaction will be committed.
IN AUTONOMOUS TRANSACTION
Examples
Using an autonomous transaction in a trigger for the database ON CONNECT
event, to log all connection attempts, including those that failed:
CREATE TRIGGER TR_CONNECT ON CONNECT
AS
BEGIN
-- Logging all attempts to connect to the database
IN AUTONOMOUS TRANSACTION DO
INSERT INTO LOG(MSG)
VALUES ('USER ' || CURRENT_USER || ' CONNECTS.');
IF (EXISTS(SELECT *
FROM BLOCKED_USERS
WHERE USERNAME = CURRENT_USER)) THEN
BEGIN
-- Logging that the attempt to connect
-- to the database failed and sending
-- a message about the event
IN AUTONOMOUS TRANSACTION DO
BEGIN
INSERT INTO LOG(MSG)
VALUES ('USER ' || CURRENT_USER || ' REFUSED.');
POST_EVENT 'CONNECTION ATTEMPT BY BLOCKED USER!';
END
-- now calling an exception
EXCEPTION EX_BADUSER;
END
END
7.7.22. POST_EVENT
Posts an event for notification to registered clients on commit
POST_EVENT event_name;
Argument | Description |
---|---|
event_name |
Event name (message) limited to 127 bytes |
The POST_EVENT
statement notifies the event manager about the event, which saves it to an event table.
When the transaction is committed, the event manager notifies applications that have registered their interest in the event.
The event name can be a code, or a short message: the choice is open as it is a string of up to 127 bytes. Keep in mind that the application listening for an event must use the exact event name when registering.
The content of the string can be a string literal, a variable or any valid SQL expression that resolves to a string.
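A typical sketch: a trigger notifying listeners that a row was added (the table and event names are illustrative):

```sql
CREATE TRIGGER POST_NEW_ORDER FOR SALES
ACTIVE AFTER INSERT POSITION 0
AS
BEGIN
  -- applications registered for 'new_order' are notified on commit
  POST_EVENT 'new_order';
END
```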
7.7.23. RETURN
Returns a value from a stored function
RETURN value;
Argument | Description |
---|---|
value |
Expression with the value to return; can be any expression type-compatible with the return type of the function |
The RETURN
statement ends the execution of a function and returns the value of the expression value.
RETURN
can only be used in PSQL functions (stored functions and local sub-functions).
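A minimal sketch of a stored function using RETURN (the function name and VAT rate are arbitrary illustrative values):

```sql
CREATE FUNCTION ADD_VAT (AMOUNT DECIMAL(18,2))
RETURNS DECIMAL(18,2)
AS
BEGIN
  -- end execution and hand the computed value back to the caller
  RETURN AMOUNT * 1.21;
END
```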
7.8. Trapping and Handling Errors
Firebird has a useful lexicon of PSQL statements and resources for trapping errors in modules and handling them. Firebird uses built-in exceptions that are raised for errors occurring when working with DML and DDL statements.
In PSQL code, exceptions are handled by means of the WHEN … DO
statement.
Handling an exception in the code involves either fixing the problem in situ, or stepping past it;
either solution allows execution to continue without returning an exception message to the client.
An exception results in execution being terminated in the current block.
Instead of passing the execution to the END
statement, the procedure moves outward through levels of nested blocks, starting from the block where the exception is caught, searching for the code of the handler that “knows” about this exception.
It stops searching when it finds the first WHEN
statement that can handle this exception.
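As a sketch of this in-situ handling, a block that substitutes NULL when an arithmetic error (such as a division by zero) is trapped; the variable names are illustrative:

```sql
BEGIN
  RESULT = A / B;
  WHEN GDSCODE arith_except DO
    RESULT = NULL;  -- fix the problem in place; execution continues
END
```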
7.8.1. System Exceptions
An exception is a message that is generated when an error occurs.
All exceptions handled by Firebird have predefined numeric values for context variables (symbols) and text messages associated with them. Error messages are output in English by default. Localized Firebird builds are available, where error messages are translated into other languages.
Complete listings of the system exceptions can be found in Appendix B, Exception Codes and Messages:
7.8.2. Custom Exceptions
Custom exceptions can be declared in the database as persistent objects and called in PSQL code to signal specific errors;
for example, to enforce certain business rules.
A custom exception consists of an identifier, and a default message of 1021 bytes.
For details, see CREATE EXCEPTION
.
7.8.3. EXCEPTION
Throws a user-defined exception or rethrows an exception
EXCEPTION [
    exception_name
    [ custom_message
    | USING (<value_list>)]
]

<value_list> ::= <val> [, <val> ...]
Argument | Description |
---|---|
exception_name |
Exception name |
custom_message |
Alternative message text to be returned to the caller interface when an exception is thrown. Maximum length of the text message is 1,021 bytes |
val |
Value expression that replaces parameter slots in the exception message text |
The EXCEPTION
statement with exception_name throws the user-defined exception with the specified name.
An alternative message text of up to 1,021 bytes can optionally override the exception’s default message text.
The default exception message can contain slots for parameters that can be filled when throwing an exception.
To pass parameter values to an exception, use the USING
clause.
Considering, in left-to-right order, each parameter passed in the exception-raising statement as “the Nth”, with N starting at 1:
-
If the Nth parameter is not passed, its slot is not replaced
-
If a
NULL
parameter is passed, the slot will be replaced with the string “*** null ***”
-
If more parameters are passed than are defined in the exception message, the surplus ones are ignored
-
The maximum number of parameters is 9
-
The maximum message length, including parameter values, is 1053 bytes
The status vector is generated as the code combination isc_except, <exception number>, isc_formatted_exception, <formatted exception message>, <exception parameters>
.
The error code used (isc_formatted_exception
) was introduced in Firebird 3.0, so the client must be at least version 3.0, or at least use the firebird.msg
from version 3.0 or higher, to translate the status vector to a string.
If the message contains a parameter slot number that is greater than 9, the second and subsequent digits will be treated as literal text.
For example:

CREATE EXCEPTION ex1
  'something wrong in @1@2@3@4@5@6@7@8@9@10@11';
SET TERM ^;
EXECUTE BLOCK AS
BEGIN
  EXCEPTION ex1 USING ('a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i');
END^
SET TERM ;^

This will produce the following output:

Statement failed, SQLSTATE = HY000
exception 1
-EX1
-something wrong in abcdefghia0a1
Exceptions can be handled in a WHEN … DO
statement.
If an exception is not handled in a module, then the effects of the actions executed inside this module are cancelled, and the caller program receives the exception (either the default text, or the custom text).
Within the exception-handling block — and only within it — the caught exception can be re-thrown by executing the EXCEPTION
statement without parameters.
If located outside the block, the re-thrown EXCEPTION
call has no effect.
Custom exceptions are stored in the system table RDB$EXCEPTIONS
.
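Declared exceptions can be inspected by querying that table directly; a minimal sketch:

```sql
-- List user-declared exceptions and their default messages
SELECT RDB$EXCEPTION_NAME, RDB$MESSAGE
FROM RDB$EXCEPTIONS
ORDER BY RDB$EXCEPTION_NAME;
```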
EXCEPTION
Examples
-
Throwing an exception upon a condition in the
SHIP_ORDER
stored procedure:

CREATE OR ALTER PROCEDURE SHIP_ORDER (
  PO_NUM CHAR(8))
AS
  DECLARE VARIABLE ord_stat  CHAR(7);
  DECLARE VARIABLE hold_stat CHAR(1);
  DECLARE VARIABLE cust_no   INTEGER;
  DECLARE VARIABLE any_po    CHAR(8);
BEGIN
  SELECT s.order_status, c.on_hold, c.cust_no
  FROM sales s, customer c
  WHERE po_number = :po_num
    AND s.cust_no = c.cust_no
  INTO :ord_stat, :hold_stat, :cust_no;

  IF (ord_stat = 'shipped') THEN
    EXCEPTION order_already_shipped;

  /* Other statements */
END
-
Throwing an exception upon a condition and replacing the original message with an alternative message:
CREATE OR ALTER PROCEDURE SHIP_ORDER (
  PO_NUM CHAR(8))
AS
  DECLARE VARIABLE ord_stat  CHAR(7);
  DECLARE VARIABLE hold_stat CHAR(1);
  DECLARE VARIABLE cust_no   INTEGER;
  DECLARE VARIABLE any_po    CHAR(8);
BEGIN
  SELECT s.order_status, c.on_hold, c.cust_no
  FROM sales s, customer c
  WHERE po_number = :po_num
    AND s.cust_no = c.cust_no
  INTO :ord_stat, :hold_stat, :cust_no;

  IF (ord_stat = 'shipped') THEN
    EXCEPTION order_already_shipped
      'Order status is "' || ord_stat || '"';

  /* Other statements */
END
-
Using a parameterized exception:
CREATE EXCEPTION EX_BAD_SP_NAME
  'Name of procedures must start with ''@1'' : ''@2''';
...
CREATE TRIGGER TRG_SP_CREATE BEFORE CREATE PROCEDURE
AS
  DECLARE SP_NAME VARCHAR(255);
BEGIN
  SP_NAME = RDB$GET_CONTEXT('DDL_TRIGGER', 'OBJECT_NAME');
  IF (SP_NAME NOT STARTING 'SP_') THEN
    EXCEPTION EX_BAD_SP_NAME USING ('SP_', SP_NAME);
END
-
Logging an error and re-throwing it in the
WHEN
block:

CREATE PROCEDURE ADD_COUNTRY (
  ACountryName COUNTRYNAME,
  ACurrency VARCHAR(10))
AS
BEGIN
  INSERT INTO country (country, currency)
  VALUES (:ACountryName, :ACurrency);
  WHEN ANY DO
  BEGIN
    -- write an error in log
    IN AUTONOMOUS TRANSACTION DO
      INSERT INTO ERROR_LOG (PSQL_MODULE, GDS_CODE, SQL_CODE, SQL_STATE)
      VALUES ('ADD_COUNTRY', GDSCODE, SQLCODE, SQLSTATE);
    -- Re-throw exception
    EXCEPTION;
  END
END
7.8.4. WHEN … DO
Catches an exception for error handling
<block> ::=
  BEGIN
    [<compound_statement> ...]
    [<when_do> ...]
  END

<compound_statement> ::= {<block> | <statement>}

<when_do> ::=
  WHEN {<error> [, <error> ...] | ANY}
  DO <compound_statement>

<error> ::=
  { EXCEPTION exception_name
  | SQLCODE number
  | GDSCODE errcode
  | SQLSTATE sqlstate_code }
Argument | Description |
---|---|
exception_name | Exception name |
number | SQLCODE error code |
errcode | Symbolic GDSCODE error name |
sqlstate_code | String literal with the SQLSTATE error code |
compound_statement | A single statement, or a block of statements |
The WHEN … DO
statement handles Firebird errors and user-defined exceptions.
The statement catches all errors and user-defined exceptions listed after the WHEN
keyword.
If WHEN
is followed by the keyword ANY
, the statement catches any error or user-defined exception, even if they have already been handled in a WHEN
block located higher up.
The WHEN … DO
statements must be located at the end of a block of statements, before the block’s END
keyword, and after any other statement.
The keyword DO
is followed by a single statement, or statements wrapped in a BEGIN … END
block, that handles the exception.
The SQLCODE
, GDSCODE
, and SQLSTATE
context variables are available in the context of this statement or block.
Use the RDB$ERROR
function to obtain the SQLCODE, GDSCODE, SQLSTATE, custom exception name and exception message.
The EXCEPTION
statement, without parameters, can also be used in this context to re-throw the error or exception.
The WHEN … DO
statement or block is only executed when one of the events targeted by its conditions occurs at run-time.
If the WHEN … DO
statement is executed, even if it does nothing, execution will continue as if no error occurred: the error or user-defined exception neither terminates nor rolls back the operations of the trigger or stored procedure.
However, if the WHEN … DO
statement or block does nothing to handle or resolve the error, the DML statement (SELECT
, INSERT
, UPDATE
, DELETE
, MERGE
) that caused the error will be rolled back and none of the statements below it in the same block of statements are executed.
|
Scope of a WHEN … DO
Statement
A WHEN … DO
statement catches errors and exceptions in the current block of statements.
It also catches exceptions from nested blocks, if those exceptions have not been handled in those blocks.
All changes made before the statement that caused the error are visible to a WHEN … DO
statement.
However, if you try to log them in an autonomous transaction, those changes are unavailable, because the transaction where the changes took place is not committed at the point when the autonomous transaction is started.
Example 4, below, demonstrates this behaviour.
When handling exceptions, it is sometimes desirable to handle the exception by writing a log message to mark the fault and having execution continue past the faulty record. Logs can be written to regular tables, but there is a problem with that: the log records will “disappear” if an unhandled error causes the module to stop executing, and a rollback is performed. External tables can be useful here, as data written to them is transaction-independent. The data inserted into a linked external file will still be there, regardless of whether the overall process succeeds or not. |
Examples using WHEN…DO
-
Replacing the standard error with a custom one:
CREATE EXCEPTION COUNTRY_EXIST '';
SET TERM ^;
CREATE PROCEDURE ADD_COUNTRY (
  ACountryName COUNTRYNAME,
  ACurrency VARCHAR(10))
AS
BEGIN
  INSERT INTO country (country, currency)
  VALUES (:ACountryName, :ACurrency);
  WHEN SQLCODE -803 DO
    EXCEPTION COUNTRY_EXIST 'Country already exists!';
END^
SET TERM ;^
-
Logging an error and re-throwing it in the
WHEN
block:

CREATE PROCEDURE ADD_COUNTRY (
  ACountryName COUNTRYNAME,
  ACurrency VARCHAR(10))
AS
BEGIN
  INSERT INTO country (country, currency)
  VALUES (:ACountryName, :ACurrency);
  WHEN ANY DO
  BEGIN
    -- write an error in log
    IN AUTONOMOUS TRANSACTION DO
      INSERT INTO ERROR_LOG (PSQL_MODULE, GDS_CODE, SQL_CODE,
                             SQL_STATE, MESSAGE)
      VALUES ('ADD_COUNTRY', GDSCODE, SQLCODE, SQLSTATE,
              RDB$ERROR(MESSAGE));
    -- Re-throw exception
    EXCEPTION;
  END
END
-
Handling several errors in one
WHEN
block:

...
WHEN GDSCODE GRANT_OBJ_NOTFOUND,
     GDSCODE GRANT_FLD_NOTFOUND,
     GDSCODE GRANT_NOPRIV,
     GDSCODE GRANT_NOPRIV_ON_BASE
DO
BEGIN
  EXECUTE PROCEDURE LOG_GRANT_ERROR(GDSCODE, RDB$ERROR(MESSAGE));
  EXIT;
END
...
-
Catching errors using the SQLSTATE code
EXECUTE BLOCK
AS
  DECLARE VARIABLE I INT;
BEGIN
  BEGIN
    I = 1/0;
    WHEN SQLSTATE '22003' DO
      EXCEPTION E_CUSTOM_EXCEPTION 'Numeric value out of range.';
    WHEN SQLSTATE '22012' DO
      EXCEPTION E_CUSTOM_EXCEPTION 'Division by zero.';
    WHEN SQLSTATE '23000' DO
      EXCEPTION E_CUSTOM_EXCEPTION 'Integrity constraint violation.';
  END
END
8. Built-in Scalar Functions
Unless explicitly mentioned otherwise in an “Available in” section, functions are available in DSQL and PSQL. Availability of built-in functions in ESQL is not tracked by this Language Reference.
8.1. Context Functions
8.1.1. RDB$GET_CONTEXT()
Retrieves the value of a context variable from a namespace
VARCHAR(255)
RDB$GET_CONTEXT ('<namespace>', <varname>)

<namespace> ::= SYSTEM | USER_SESSION | USER_TRANSACTION | DDL_TRIGGER
<varname> ::= A case-sensitive quoted string of max. 80 characters
Parameter | Description |
---|---|
namespace | Namespace |
varname | Variable name; case-sensitive with a maximum length of 80 characters |
The USER_SESSION
and USER_TRANSACTION
namespaces are initially empty.
A user can create and set variables with RDB$SET_CONTEXT()
and retrieve them with RDB$GET_CONTEXT()
.
The SYSTEM
namespace is read-only.
The DDL_TRIGGER
namespace is only valid in DDL triggers, and is read-only.
The SYSTEM
and DDL_TRIGGER
namespaces contain a number of predefined variables, shown below.
If the polled variable exists in the given namespace, its value will be returned as a string of max. 255 characters.
If the namespace doesn’t exist or if you try to access a non-existing variable in the SYSTEM
or DDL_TRIGGER
namespace, an error is raised.
If you request a non-existing variable in one of the user namespaces, NULL
is returned.
Both namespace and variable names must be given as single-quoted, case-sensitive, non-NULL
strings.
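A short illustration, reading predefined SYSTEM variables and a non-existing user variable:

```sql
-- Read predefined variables from the read-only SYSTEM namespace
select
  rdb$get_context('SYSTEM', 'ENGINE_VERSION') as engine_version,
  rdb$get_context('SYSTEM', 'ISOLATION_LEVEL') as isolation_level
from rdb$database;

-- A non-existing variable in a user namespace yields NULL, not an error
select rdb$get_context('USER_SESSION', 'NoSuchVar') from rdb$database;
```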
The SYSTEM
Namespace
CLIENT_ADDRESS
-
For TCP, this is the IP address. For XNET, the local process ID. For all other protocols this variable is NULL.
CLIENT_HOST
-
The wire protocol host name of the remote client. Value is returned for all supported protocols.
CLIENT_PID
-
Process ID of the remote client application.
CLIENT_PROCESS
-
Process name of the remote client application.
CURRENT_ROLE
-
Same as the global CURRENT_ROLE variable.
CURRENT_USER
-
Same as the global CURRENT_USER variable.
DB_FILE_ID
-
Unique filesystem-level ID of the current database.
DB_GUID
-
GUID of the current database.
DB_NAME
-
Canonical name of the current database; either the full path to the database or — if connecting via the path is disallowed — its alias.
DECFLOAT_ROUND
-
Rounding mode of the current connection used in operations with DECFLOAT values. See also SET DECFLOAT.
DECFLOAT_TRAPS
-
Exceptional conditions for the current connection in operations with DECFLOAT values that cause a trap. See also SET DECFLOAT.
EFFECTIVE_USER
-
Effective user at the point RDB$GET_CONTEXT is called; indicates whose privileges are currently used to execute a function, procedure or trigger.
ENGINE_VERSION
-
The Firebird engine (server) version.
EXT_CONN_POOL_ACTIVE_COUNT
-
Count of active connections associated with the external connection pool.
EXT_CONN_POOL_IDLE_COUNT
-
Count of currently inactive connections available in the connection pool.
EXT_CONN_POOL_LIFETIME
-
External connection pool idle connection lifetime, in seconds.
EXT_CONN_POOL_SIZE
-
External connection pool size.
GLOBAL_CN
-
Most current value of the global Commit Number counter.
ISOLATION_LEVEL
-
The isolation level of the current transaction: 'READ COMMITTED', 'SNAPSHOT' or 'CONSISTENCY'.
LOCK_TIMEOUT
-
Lock timeout of the current transaction.
NETWORK_PROTOCOL
-
The protocol used for the connection: 'TCPv4', 'TCPv6', 'XNET' or NULL.
PARALLEL_WORKERS
-
The maximum number of parallel workers of the connection.
READ_ONLY
-
Returns 'TRUE' if the current transaction is read-only and 'FALSE' otherwise.
REPLICA_MODE
-
Replica mode of the database: 'READ-ONLY', 'READ-WRITE' or NULL.
REPLICATION_SEQUENCE
-
Current replication sequence (number of the latest segment written to the replication journal).
SESSION_ID
-
Same as the global CURRENT_CONNECTION variable.
SESSION_IDLE_TIMEOUT
-
Connection-level idle timeout, or 0 if no timeout was set. When 0 is reported, the database ConnectionIdleTimeout from databases.conf or firebird.conf applies.
SESSION_TIMEZONE
-
Current session time zone.
SNAPSHOT_NUMBER
-
Current snapshot number for the transaction executing this statement. For SNAPSHOT and SNAPSHOT TABLE STABILITY, this number is stable for the duration of the transaction; for READ COMMITTED this number will change (increment) as concurrent transactions are committed.
STATEMENT_TIMEOUT
-
Connection-level statement timeout, or 0 if no timeout was set. When 0 is reported, the database StatementTimeout from databases.conf or firebird.conf applies.
TRANSACTION_ID
-
Same as the global CURRENT_TRANSACTION variable.
WIRE_COMPRESSED
-
Compression status of the current connection. If the connection is compressed, returns TRUE; if it is not compressed, returns FALSE. Returns NULL if the connection is embedded.
WIRE_CRYPT_PLUGIN
-
If the connection is encrypted, returns the name of the current plugin, otherwise NULL.
WIRE_ENCRYPTED
-
Encryption status of the current connection. If the connection is encrypted, returns TRUE; if it is not encrypted, returns FALSE. Returns NULL if the connection is embedded.
The DDL_TRIGGER
Namespace
The DDL_TRIGGER
namespace is valid only when a DDL trigger is running.
Its use is also valid in stored procedures and functions when called by DDL triggers.
The DDL_TRIGGER
context works like a stack.
Before a DDL trigger is fired, the values relative to the executed command are pushed onto this stack.
After the trigger finishes, the values are popped.
So in the case of cascade DDL statements, when a user DDL command fires a DDL trigger and this trigger executes another DDL command with EXECUTE STATEMENT
, the values of the DDL_TRIGGER
namespace are the ones relative to the command that fired the last DDL trigger on the call stack.
EVENT_TYPE
-
event type (CREATE, ALTER, DROP)
OBJECT_TYPE
-
object type (TABLE, VIEW, etc)
DDL_EVENT
-
event name (<ddl event item>), where <ddl event item> is EVENT_TYPE || ' ' || OBJECT_TYPE
OBJECT_NAME
-
metadata object name
OLD_OBJECT_NAME
-
for tracking the renaming of a domain (see note)
NEW_OBJECT_NAME
-
for tracking the renaming of a domain (see note)
SQL_TEXT
-
SQL statement text
|
8.1.2. RDB$SET_CONTEXT()
Creates, sets or clears a variable in one of the user-writable namespaces
INTEGER
RDB$SET_CONTEXT ('<namespace>', <varname>, <value> | NULL)

<namespace> ::= USER_SESSION | USER_TRANSACTION
<varname> ::= A case-sensitive quoted string of max. 80 characters
<value> ::= A value of any type, as long as it's castable to a VARCHAR(255)
Parameter | Description |
---|---|
namespace | Namespace |
varname | Variable name. Case-sensitive. Maximum length is 80 characters |
value | Data of any type, provided it can be cast to VARCHAR(255) |
The USER_SESSION
and USER_TRANSACTION
namespaces are initially empty.
A user can create and set variables with RDB$SET_CONTEXT()
and retrieve them with RDB$GET_CONTEXT()
.
The USER_SESSION
context is bound to the current connection, the USER_TRANSACTION
context to the current transaction.
-
When a transaction ends, its USER_TRANSACTION context is cleared.
-
When a connection is closed, its USER_SESSION context is cleared.
-
When a connection is reset using ALTER SESSION RESET, the USER_TRANSACTION and USER_SESSION contexts are cleared.
The function returns 1
when the variable already existed before the call and 0
when it didn’t.
To remove a variable from a context, set it to NULL
.
If the given namespace doesn’t exist, an error is raised.
Both namespace and variable names must be entered as single-quoted, case-sensitive, non-NULL
strings.
|
select rdb$set_context('USER_SESSION', 'MyVar', 493) from rdb$database
rdb$set_context('USER_SESSION', 'RecordsFound', RecCounter);
select rdb$set_context('USER_TRANSACTION', 'Savepoints', 'Yes')
from rdb$database
8.2. Mathematical Functions
8.2.1. ABS()
Absolute value
Numerical, matching input type
ABS (number)
Parameter | Description |
---|---|
number |
An expression of a numeric type |
8.2.2. ACOS()
Arc cosine
DOUBLE PRECISION
ACOS (number)
Parameter | Description |
---|---|
number |
An expression of a numeric type within the range [-1, 1] |
-
The result is an angle in the range [0, pi].
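For example:

```sql
select acos(-1) from rdb$database -- returns pi (3.141592653589793)
select acos(1) from rdb$database  -- returns 0
```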
8.2.3. ACOSH()
Inverse hyperbolic cosine
DOUBLE PRECISION
ACOSH (number)
Parameter | Description |
---|---|
number |
Any non- |
The result is in the range [0, INF].
8.2.4. ASIN()
Arc sine
DOUBLE PRECISION
ASIN (number)
Parameter | Description |
---|---|
number |
An expression of a numeric type within the range [-1, 1] |
The result is an angle in the range [-pi/2, pi/2].
8.2.5. ASINH()
Inverse hyperbolic sine
DOUBLE PRECISION
ASINH (number)
Parameter | Description |
---|---|
number |
Any non- |
The result is in the range [-INF, INF].
8.2.6. ATAN()
Arc tangent
DOUBLE PRECISION
ATAN (number)
Parameter | Description |
---|---|
number |
An expression of a numeric type |
The result is an angle in the range <-pi/2, pi/2>.
8.2.7. ATAN2()
Two-argument arc tangent
DOUBLE PRECISION
ATAN2 (y, x)
Parameter | Description |
---|---|
y |
An expression of a numeric type |
x |
An expression of a numeric type |
Returns the angle whose sine-to-cosine ratio is given by the two arguments, and whose sine and cosine signs correspond to the signs of the arguments. This allows results across the entire circle, including the angles -pi/2 and pi/2.
-
The result is an angle in the range [-pi, pi].
-
If x is negative, the result is pi if y is 0, and -pi if y is -0.
-
If both y and x are 0, the result is meaningless. An error will be raised if both arguments are 0.
-
A fully equivalent description of this function is the following:
ATAN2(y, x)
is the angle between the positive X-axis and the line from the origin to the point (x, y). This also makes it obvious thatATAN2(0, 0)
is undefined. -
If x is greater than 0,
ATAN2(y, x)
is the same asATAN(y/x)
. -
If both sine and cosine of the angle are already known,
ATAN2(sin, cos)
gives the angle.
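For example:

```sql
select atan2(1, 1) from rdb$database  -- returns pi/4 (0.7853981633974483)
select atan2(1, 0) from rdb$database  -- returns pi/2; defined even though y/x is not
select atan2(0, -1) from rdb$database -- returns pi
```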
8.2.8. ATANH()
Inverse hyperbolic tangent
DOUBLE PRECISION
ATANH (number)
Parameter | Description |
---|---|
number |
Any non- |
The result is a number in the range [-INF, INF].
8.2.9. CEIL()
, CEILING()
Ceiling of a number
BIGINT
or INT128
for exact numeric number, or DOUBLE PRECISION
or DECFLOAT
for floating point number
CEIL[ING] (number)
Parameter | Description |
---|---|
number |
An expression of a numeric type |
Returns the smallest whole number greater than or equal to the argument.
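For example:

```sql
select ceil(2.1) from rdb$database     -- returns 3
select ceiling(-2.1) from rdb$database -- returns -2: the smallest whole number >= -2.1
```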
8.2.10. COS()
Cosine
DOUBLE PRECISION
COS (angle)
Parameter | Description |
---|---|
angle |
An angle in radians |
The result is in the range [-1, 1].
8.2.11. COSH()
Hyperbolic cosine
DOUBLE PRECISION
COSH (number)
Parameter | Description |
---|---|
number |
A number of a numeric type |
The result is in the range [1, INF].
8.2.12. COT()
Cotangent
DOUBLE PRECISION
COT (angle)
Parameter | Description |
---|---|
angle |
An angle in radians |
8.2.13. EXP()
Natural exponent
DOUBLE PRECISION
EXP (number)
Parameter | Description |
---|---|
number |
A number of a numeric type |
Returns the natural exponential, e^number.
8.2.14. FLOOR()
Floor of a number
BIGINT
or INT128
for exact numeric number, or DOUBLE PRECISION
or DECFLOAT
for floating point number
FLOOR (number)
Parameter | Description |
---|---|
number |
An expression of a numeric type |
Returns the largest whole number smaller than or equal to the argument.
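For example:

```sql
select floor(2.9) from rdb$database  -- returns 2
select floor(-2.1) from rdb$database -- returns -3: the largest whole number <= -2.1
```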
8.2.15. LN()
Natural logarithm
DOUBLE PRECISION
LN (number)
Parameter | Description |
---|---|
number |
An expression of a numeric type |
An error is raised if the argument is negative or 0.
8.2.16. LOG()
Logarithm with variable base
DOUBLE PRECISION
LOG (x, y)
Parameter | Description |
---|---|
x |
Base. An expression of a numeric type |
y |
An expression of a numeric type |
Returns the x-based logarithm of y.
-
If either argument is 0 or below, an error is raised.
-
If both arguments are 1,
NaN
is returned. -
If x = 1 and y < 1,
-INF
is returned. -
If x = 1 and y > 1,
INF
is returned.
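For example:

```sql
select log(2, 64) from rdb$database    -- returns approximately 6, since 2^6 = 64
select log(10, 1000) from rdb$database -- returns approximately 3
```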
8.2.17. LOG10()
Decimal (base-10) logarithm
DOUBLE PRECISION
LOG10 (number)
Parameter | Description |
---|---|
number |
An expression of a numeric type |
An error is raised if the argument is negative or 0.
8.2.18. MOD()
Remainder
SMALLINT
, INTEGER
or BIGINT
depending on the type of a.
If a is a floating-point type, the result is a BIGINT
.
MOD (a, b)
Parameter | Description |
---|---|
a |
An expression of a numeric type |
b |
An expression of a numeric type |
Returns the remainder of an integer division.
-
Non-integer arguments are rounded before the division takes place. So, “
mod(7.5, 2.5)
” gives 2 (“mod(8, 3)
”), not 0. -
Do not confuse
MOD()
with the mathematical modulus operator; e.g. mathematically,-21 mod 4
is 3, while Firebird’sMOD(-21, 4)
is -1. In other words,MOD()
behaves as%
in languages like C and Java.
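For example:

```sql
select mod(11, 4) from rdb$database    -- returns 3
select mod(-21, 4) from rdb$database   -- returns -1: the sign follows the dividend
select mod(7.5, 2.5) from rdb$database -- returns 2: arguments are rounded to 8 and 3 first
```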
8.2.20. POWER()
Power
DOUBLE PRECISION
POWER (x, y)
Parameter | Description |
---|---|
x |
An expression of a numeric type |
y |
An expression of a numeric type |
Returns x to the power of y (x^y).
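For example:

```sql
select power(2, 10) from rdb$database  -- returns 1024 (as DOUBLE PRECISION)
select power(2, 0.5) from rdb$database -- returns approximately 1.4142 (square root of 2)
```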
8.2.21. RAND()
Generates a random number
DOUBLE PRECISION
RAND ()
Returns a random number between 0 and 1.
8.2.22. ROUND()
Rounds a number to the nearest integer or to a specified scale
single argument: integer type, DOUBLE PRECISION
or DECFLOAT
;
two arguments: numerical, matching first argument
ROUND (number [, scale])
Parameter | Description |
---|---|
number |
An expression of a numeric type |
scale |
An integer specifying the number of decimal places toward which rounding is to be performed, e.g.:
|
Rounds a number to the nearest integer.
If the fractional part is exactly 0.5
, rounding is upward for positive numbers and downward for negative numbers.
With the optional scale argument, the number can be rounded to powers-of-ten multiples (tens, hundreds, tenths, hundredths, etc.).
If you are used to the behaviour of the external function |
ROUND
Examples
If the scale argument is present, the result usually has the same scale as the first argument:
ROUND(123.654, 1) -- returns 123.700 (not 123.7)
ROUND(8341.7, -3) -- returns 8000.0 (not 8000)
ROUND(45.1212, 0) -- returns 45.0000 (not 45)
Otherwise, the result scale is 0:
ROUND(45.1212) -- returns 45
8.2.23. SIGN()
Sign or signum
SMALLINT
SIGN (number)
Parameter | Description |
---|---|
number |
An expression of a numeric type |
Returns the sign of the argument: -1, 0 or 1
-
number < 0
→-1
-
number = 0
→0
-
number > 0
→1
8.2.24. SIN()
Sine
DOUBLE PRECISION
SIN (angle)
Parameter | Description |
---|---|
angle |
An angle, in radians |
The result is in the range [-1, 1].
8.2.25. SINH()
Hyperbolic sine
DOUBLE PRECISION
SINH (number)
Parameter | Description |
---|---|
number |
An expression of a numeric type |
8.2.26. SQRT()
Square root
DOUBLE PRECISION
SQRT (number)
Parameter | Description |
---|---|
number |
An expression of a numeric type |
If number is negative, an error is raised.
8.2.27. TAN()
Tangent
DOUBLE PRECISION
TAN (angle)
Parameter | Description |
---|---|
angle |
An angle, in radians |
8.2.28. TANH()
Hyperbolic tangent
DOUBLE PRECISION
TANH (number)
Parameter | Description |
---|---|
number |
An expression of a numeric type |
Due to rounding, the result is in the range [-1, 1] (mathematically, it’s <-1, 1>).
8.2.29. TRUNC()
Truncate number
single argument: integer type, DOUBLE PRECISION
or DECFLOAT
;
two arguments: numerical, matching first argument
TRUNC (number [, scale])
Parameter | Description |
---|---|
number |
An expression of a numeric type |
scale |
An integer specifying the number of decimal places toward which truncating is to be performed, e.g.:
|
The single argument variant returns the integer part of a number. With the optional scale argument, the number can be truncated to powers-of-ten multiples (tens, hundreds, tenths, hundredths, etc.).
-
If the scale argument is present, the result usually has the same scale as the first argument, e.g.:
-
TRUNC(789.2225, 2) returns 789.2200 (not 789.22)
-
TRUNC(345.4, -2) returns 300.0 (not 300)
-
TRUNC(-163.41, 0) returns -163.00 (not -163)
-
Otherwise, the result scale is 0:
-
TRUNC(-163.41) returns -163
-
If you are used to the behaviour of the external function |
8.3. String and Binary Functions
8.3.1. ASCII_CHAR()
Character from ASCII code
CHAR(1) CHARACTER SET NONE
ASCII_CHAR (code)
Parameter | Description |
---|---|
code |
An integer within the range from 0 to 255 |
Returns the ASCII character corresponding to the number passed in the argument.
|
8.3.2. ASCII_VAL()
ASCII code from string
SMALLINT
ASCII_VAL (ch)
Parameter | Description |
---|---|
ch |
A string of the |
Returns the ASCII code of the character passed in.
-
If the argument is a string with more than one character, the ASCII code of the first character is returned.
-
If the argument is an empty string, 0 is returned.
-
If the argument is
NULL
,NULL
is returned. -
If the first character of the argument string is multi-byte, an error is raised.
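For example, pairing ASCII_VAL with its counterpart ASCII_CHAR:

```sql
select ascii_val('A') from rdb$database         -- returns 65
select ascii_val('Amsterdam') from rdb$database -- returns 65: only the first character counts
select ascii_char(65) from rdb$database         -- returns 'A'
```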
8.3.3. BASE64_DECODE()
Decodes a base64 string to binary
VARBINARY
or BLOB
BASE64_DECODE (base64_data)
Parameter | Description |
---|---|
base64_data |
Base64 encoded data, padded with |
BASE64_DECODE
decodes a string with base64-encoded data, and returns the decoded value as VARBINARY
or BLOB
as appropriate for the input.
If the length of the type of base64_data is not a multiple of 4, an error is raised at prepare time.
If the length of the value of base64_data is not a multiple of 4, an error is raised at execution time.
When the input is not BLOB
, the length of the resulting type is calculated as type_length * 3 / 4
, where type_length is the maximum length in characters of the input type.
8.3.4. BASE64_ENCODE()
Encodes a (binary) value to a base64 string
VARCHAR CHARACTER SET ASCII
or BLOB SUB_TYPE TEXT CHARACTER SET ASCII
BASE64_ENCODE (binary_data)
Parameter | Description |
---|---|
binary_data |
Binary data (or otherwise convertible to binary) to encode |
BASE64_ENCODE
encodes binary_data with base64, and returns the encoded value as a VARCHAR CHARACTER SET ASCII
or BLOB SUB_TYPE TEXT CHARACTER SET ASCII
as appropriate for the input.
The returned value is padded with ‘=
’ so its length is a multiple of 4.
When the input is not BLOB
, the length of the resulting type is calculated as type_length * 4 / 3
rounded up to a multiple of four, where type_length is the maximum length in bytes of the input type.
If this length exceeds the maximum length of VARCHAR
, the function returns a BLOB
.
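For example, encoding a short string and decoding it back (the binary result of BASE64_DECODE is cast to VARCHAR for display):

```sql
select base64_encode('Man') from rdb$database -- returns TWFu
select cast(base64_decode('TWFu') as varchar(3)) from rdb$database -- returns Man
```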
8.3.5. BIT_LENGTH()
String or binary length in bits
INTEGER
, or BIGINT
for BLOB
BIT_LENGTH (string)
Parameter | Description |
---|---|
string |
An expression of a string type |
Gives the length in bits of the input string.
For multibyte character sets, this may be less than the number of characters times 8 times the “formal” number of bytes per character as found in RDB$CHARACTER_SETS
.
With arguments of type CHAR
, this function takes the entire formal string length (i.e. the declared length of a field or variable) into account.
If you want to obtain the “logical” bit length, not counting the trailing spaces, right-TRIM
the argument before passing it to BIT_LENGTH
.
BIT_LENGTH
Examples
select bit_length('Hello!') from rdb$database
-- returns 48
select bit_length(_iso8859_1 'Grüß di!') from rdb$database
-- returns 64: ü and ß take up one byte each in ISO8859_1
select bit_length
(cast (_iso8859_1 'Grüß di!' as varchar(24) character set utf8))
from rdb$database
-- returns 80: ü and ß take up two bytes each in UTF8
select bit_length
(cast (_iso8859_1 'Grüß di!' as char(24) character set utf8))
from rdb$database
-- returns 208: all 24 CHAR positions count, and two of them are 16-bit
8.3.6. BLOB_APPEND()
Efficient concatenation of blobs
BLOB
BLOB_APPEND(expr1, expr2 [, exprN ... ])
Parameter | Description |
---|---|
exprN |
An expression of a type convertible to |
The BLOB_APPEND
function concatenates blobs without creating intermediate BLOB
s, avoiding excessive memory consumption and growth of the database file.
The BLOB_APPEND
function takes two or more arguments and adds them to a BLOB
which remains open for further modification by a subsequent BLOB_APPEND
call.
The resulting BLOB
is left open for writing instead of being closed when the function returns.
In other words, the BLOB
can be appended as many times as required.
The engine marks the BLOB
returned by BLOB_APPEND
with an internal flag, BLB_close_on_read
, and closes it automatically when needed.
The first argument determines the behaviour of the function:
-
NULL
: new, emptyBLOB SUB_TYPE TEXT
is created, using the connection character set as the character set -
permanent
BLOB
(from a table) or temporaryBLOB
which was already closed: newBLOB
is created with the same subtype and, if subtype isTEXT
the same character set, populated with the content of the originalBLOB
. -
temporary unclosed
BLOB
with theBLB_close_on_read
flag (e.g. created by another call toBLOB_APPEND
): used as-is, remaining arguments are appended to thisBLOB
-
other data types: a new
BLOB SUB_TYPE TEXT
is created, populated with the original argument converted to string. If the original value is a character type, its character set is used (for string literals, the connection character set), otherwise the connection character set.
Other arguments can be of any type. The following behavior is defined for them:
-
NULL
s are ignored (behaves as empty string) -
BLOB
s, if necessary, are transliterated to the character set of the first argument and their contents are appended to the result -
other data types are converted to strings (as usual) and appended to the result
The BLOB_APPEND
function returns a temporary unclosed BLOB
with the BLB_close_on_read
flag.
If the first argument is such a temporary unclosed BLOB
(e.g. created by a previous call to BLOB_APPEND
), it will be used as-is, otherwise a new BLOB
is created.
Thus, a series of operations like blob = BLOB_APPEND (blob, …)
will result in the creation of at most one BLOB
(unless you try to append a BLOB
to itself).
This blob will be automatically closed by the engine when the client reads it, assigns it to a table, or uses it in other expressions that require reading the content.
Important caveats for
BLOB_APPEND
|
Use |
BLOB_APPEND
Examples
execute block
returns (b blob sub_type text)
as
begin
-- creates a new temporary not closed BLOB
-- and writes the string from the 2nd argument into it
b = blob_append(null, 'Hello ');
-- adds two strings to the temporary BLOB without closing it
b = blob_append(b, 'World', '!');
-- comparing a BLOB with a string will close it, because the BLOB needs to be read
if (b = 'Hello World!') then
begin
-- ...
end
-- creates a temporary closed BLOB by adding a string to it
b = b || 'Close';
suspend;
end
8.3.7. CHAR_LENGTH()
, CHARACTER_LENGTH()
String length in characters
INTEGER
, or BIGINT
for BLOB
CHAR_LENGTH (string) | CHARACTER_LENGTH (string)
Parameter | Description |
---|---|
string |
An expression of a string type |
Gives the length in characters of the input string.
With arguments of type CHAR
, this function returns the formal string length (i.e. the declared length of a field or variable).
If you want to obtain the “logical” length, not counting the trailing spaces, right-TRIM
the argument before passing it to CHAR[ACTER]_LENGTH
.
This function fully supports text BLOB
s of any length and character set.
CHAR_LENGTH
Examples
select char_length('Hello!') from rdb$database
-- returns 6
select char_length(_iso8859_1 'Grüß di!') from rdb$database
-- returns 8
select char_length
(cast (_iso8859_1 'Grüß di!' as varchar(24) character set utf8))
from rdb$database
-- returns 8; the fact that ü and ß take up two bytes each is irrelevant
select char_length
(cast (_iso8859_1 'Grüß di!' as char(24) character set utf8))
from rdb$database
-- returns 24: all 24 CHAR positions count
8.3.8. CRYPT_HASH()
Cryptographic hash
Result type: VARBINARY
Syntax: CRYPT_HASH (value USING <hash>)
<hash> ::= MD5 | SHA1 | SHA256 | SHA512
Parameter | Description
---|---
value | Expression of value of any type; non-string or non-binary types are converted to string
hash | Cryptographic hash algorithm to apply
CRYPT_HASH returns a cryptographic hash calculated from the input argument using the specified algorithm. If the input argument is not a string or binary type, it is converted to string before hashing.
This function returns a VARBINARY with the length depending on the specified algorithm.
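CRYPT_HASH Example (an illustrative sketch; the value shown is the well-known SHA-256 test vector for the string 'abc', displayed in hexadecimal):
select crypt_hash('abc' using sha256) from rdb$database
-- returns the 32-byte value
-- BA7816BF8F01CFEA414140DE5DAE2223B00361A396177A9CB410FF61F20015AD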
8.3.9. HASH()
Non-cryptographic hash
Result type: INTEGER or BIGINT
Syntax: HASH (value [USING <hash>])
<hash> ::= CRC32
Parameter | Description
---|---
value | Expression of value of any type; non-string or non-binary types are converted to string
hash | Non-cryptographic hash algorithm to apply
HASH returns a hash value for the input argument. If the input argument is not a string or binary type, it is converted to string before hashing.
The optional USING clause specifies the non-cryptographic hash algorithm to apply. When the USING clause is absent, the legacy PJW algorithm is applied; this is identical to its behaviour in previous Firebird versions.
This function fully supports text BLOBs of any length and character set.
- not specified: When no algorithm is specified, Firebird applies the 64-bit variant of the non-cryptographic PJW hash function (also known as ELF64). This is a fast algorithm for general purposes (hash tables, etc.), but its collision quality is suboptimal. Other hash functions (specified explicitly in the USING clause, or cryptographic hashes through CRYPT_HASH()) should be used for more reliable hashing. The HASH function returns BIGINT for this algorithm.
- CRC32: With CRC32, Firebird applies the CRC32 algorithm using the polynomial 0x04C11DB7. The HASH function returns INTEGER for this algorithm.
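HASH Examples (a sketch of both forms; no result values are shown, as the outputs are plain numbers produced by the algorithms described above):
select hash('Hello World') from rdb$database              -- BIGINT, legacy PJW (ELF64)
select hash('Hello World' using crc32) from rdb$database  -- INTEGER, CRC32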
8.3.10. HEX_DECODE()
Decodes a hexadecimal string to binary
Result type: VARBINARY or BLOB
Syntax: HEX_DECODE (hex_data)
Parameter | Description
---|---
hex_data | Hex-encoded data
HEX_DECODE decodes a string with hex-encoded data, and returns the decoded value as VARBINARY or BLOB as appropriate for the input. If the length of the type of hex_data is not a multiple of 2, an error is raised at prepare time. If the length of the value of hex_data is not a multiple of 2, an error is raised at execution time.
When the input is not BLOB, the length of the resulting type is calculated as type_length / 2, where type_length is the maximum length in characters of the input type.
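HEX_DECODE Example (a minimal sketch; 0x48, 0x45, 0x4C and 0x4F are the ASCII codes of 'H', 'E', 'L' and 'O'):
select hex_decode('48454C4C4F') from rdb$database
-- returns the 5-byte binary value 'HELLO'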
8.3.11. HEX_ENCODE()
Encodes a (binary) value to a hexadecimal string
Result type: VARCHAR CHARACTER SET ASCII or BLOB SUB_TYPE TEXT CHARACTER SET ASCII
Syntax: HEX_ENCODE (binary_data)
Parameter | Description
---|---
binary_data | Binary data (or otherwise convertible to binary) to encode
HEX_ENCODE encodes binary_data with hex, and returns the encoded value as a VARCHAR CHARACTER SET ASCII or BLOB SUB_TYPE TEXT CHARACTER SET ASCII as appropriate for the input.
When the input is not BLOB, the length of the resulting type is calculated as type_length * 2, where type_length is the maximum length in bytes of the input type. If this length exceeds the maximum length of VARCHAR, the function returns a BLOB.
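HEX_ENCODE Example (a minimal sketch, assuming an ASCII-compatible connection character set, in which 'A', 'B' and 'C' are 0x41, 0x42 and 0x43):
select hex_encode('ABC') from rdb$database
-- returns '414243'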
8.3.12. LEFT()
Extracts the leftmost part of a string
Result type: VARCHAR or BLOB
Syntax: LEFT (string, length)
Parameter | Description
---|---
string | An expression of a string type
length | Integer expression. The number of characters to return
- This function fully supports text BLOBs of any length, including those with a multi-byte character set.
- If string is a BLOB, the result is a BLOB. Otherwise, the result is a VARCHAR(n) with n the length of the input string.
- If the length argument exceeds the string length, the input string is returned unchanged.
- If the length argument is not a whole number, bankers' rounding (round-to-even) is applied, i.e. 0.5 becomes 0, 1.5 becomes 2, 2.5 becomes 2, 3.5 becomes 4, etc.
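LEFT Examples (a short sketch of the rules above):
left('Hello World', 5)  -- returns 'Hello'
left('Hello', 10)       -- returns 'Hello' (length exceeds the string length)
left('Hello', 2.5)      -- returns 'He' (2.5 is rounded to 2, round-to-even)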
8.3.13. LOWER()
Converts a string to lowercase
Result type: (VAR)CHAR, (VAR)BINARY or BLOB
Syntax: LOWER (string)
Parameter | Description
---|---
string | An expression of a string type
Returns the lowercase equivalent of the input string. The exact result depends on the character set. With ASCII or NONE, for instance, only ASCII characters are lowercased; with character set OCTETS/(VAR)BINARY, the entire string is returned unchanged.
8.3.14. LPAD()
Left-pads a string
Result type: VARCHAR or BLOB
Syntax: LPAD (str, endlen [, padstr])
Parameter | Description
---|---
str | An expression of a string type
endlen | Output string length
padstr | The character or string to be used to pad the source string up to the specified length. Default is space (' ')
Left-pads a string with spaces or with a user-supplied string until a given length is reached.
- This function fully supports text BLOBs of any length and character set.
- If str is a BLOB, the result is a BLOB. Otherwise, the result is a VARCHAR(endlen).
- If padstr is given and equal to '' (empty string), no padding takes place.
- If endlen is less than the current string length, the string is truncated to endlen, even if padstr is the empty string.
When used on a BLOB, this function may need to load the entire BLOB into memory. Although it does try to limit memory consumption, this may affect performance if huge BLOBs are involved.
LPAD Examples
lpad ('Hello', 12) -- returns ' Hello'
lpad ('Hello', 12, '-') -- returns '-------Hello'
lpad ('Hello', 12, '') -- returns 'Hello'
lpad ('Hello', 12, 'abc') -- returns 'abcabcaHello'
lpad ('Hello', 12, 'abcdefghij') -- returns 'abcdefgHello'
lpad ('Hello', 2) -- returns 'He'
lpad ('Hello', 2, '-') -- returns 'He'
lpad ('Hello', 2, '') -- returns 'He'
8.3.15. OCTET_LENGTH()
Length in bytes (octets) of a string or binary value
Result type: INTEGER, or BIGINT for BLOB
Syntax: OCTET_LENGTH (string)
Parameter | Description
---|---
string | An expression of a string type
Gives the length in bytes (octets) of the input string. For multibyte character sets, this may be less than the number of characters times the “formal” number of bytes per character as found in RDB$CHARACTER_SETS.
With arguments of type CHAR or BINARY, this function takes the entire formal string length (i.e. the declared length of a field or variable) into account. If you want to obtain the “logical” byte length, not counting the trailing spaces, right-TRIM the argument before passing it to OCTET_LENGTH.
OCTET_LENGTH Examples
select octet_length('Hello!') from rdb$database
-- returns 6
select octet_length(_iso8859_1 'Grüß di!') from rdb$database
-- returns 8: ü and ß take up one byte each in ISO8859_1
select octet_length
(cast (_iso8859_1 'Grüß di!' as varchar(24) character set utf8))
from rdb$database
-- returns 10: ü and ß take up two bytes each in UTF8
select octet_length
(cast (_iso8859_1 'Grüß di!' as char(24) character set utf8))
from rdb$database
-- returns 26: all 24 CHAR positions count, and two of them are 2-byte
8.3.16. OVERLAY()
Overwrites part of, or inserts into, a string
Result type: VARCHAR or BLOB
Syntax: OVERLAY (string PLACING replacement FROM pos [FOR length])
Parameter | Description
---|---
string | The string into which the replacement takes place
replacement | Replacement string
pos | The position from which replacement takes place (starting position)
length | The number of characters that are to be overwritten
By default, the number of characters removed from (overwritten in) the host string equals the length of the replacement string. With the optional fourth argument, a different number of characters can be specified for removal.
- This function supports BLOBs of any length.
- If string or replacement is a BLOB, the result is a BLOB. Otherwise, the result is a VARCHAR(n) with n the sum of the lengths of string and replacement.
- As usual in SQL string functions, pos is 1-based.
- If pos is beyond the end of string, replacement is placed directly after string.
- If the number of characters from pos to the end of string is smaller than the length of replacement (or than the length argument, if present), string is truncated at pos and replacement placed after it.
- The effect of a “FOR 0” clause is that replacement is inserted into string.
- If any argument is NULL, the result is NULL.
- If pos or length is not a whole number, bankers' rounding (round-to-even) is applied, i.e. 0.5 becomes 0, 1.5 becomes 2, 2.5 becomes 2, 3.5 becomes 4, etc.
When used on a BLOB, this function may need to load the entire BLOB into memory. Although it does try to limit memory consumption, this may affect performance if huge BLOBs are involved.
OVERLAY Examples
overlay ('Goodbye' placing 'Hello' from 2) -- returns 'GHelloe'
overlay ('Goodbye' placing 'Hello' from 5) -- returns 'GoodHello'
overlay ('Goodbye' placing 'Hello' from 8) -- returns 'GoodbyeHello'
overlay ('Goodbye' placing 'Hello' from 20) -- returns 'GoodbyeHello'
overlay ('Goodbye' placing 'Hello' from 2 for 0) -- r. 'GHellooodbye'
overlay ('Goodbye' placing 'Hello' from 2 for 3) -- r. 'GHellobye'
overlay ('Goodbye' placing 'Hello' from 2 for 6) -- r. 'GHello'
overlay ('Goodbye' placing 'Hello' from 2 for 9) -- r. 'GHello'
overlay ('Goodbye' placing '' from 4) -- returns 'Goodbye'
overlay ('Goodbye' placing '' from 4 for 3) -- returns 'Gooe'
overlay ('Goodbye' placing '' from 4 for 20) -- returns 'Goo'
overlay ('' placing 'Hello' from 4) -- returns 'Hello'
overlay ('' placing 'Hello' from 4 for 0) -- returns 'Hello'
overlay ('' placing 'Hello' from 4 for 20) -- returns 'Hello'
8.3.17. POSITION()
Finds the position of the first or next occurrence of a substring in a string
Result type: INTEGER
Syntax: POSITION (substr IN string) | POSITION (substr, string [, startpos])
Parameter | Description
---|---
substr | The substring whose position is to be searched for
string | The string which is to be searched
startpos | The position in string where the search is to start
Returns the (1-based) position of the first occurrence of a substring in a host string. With the optional third argument, the search starts at a given offset, disregarding any matches that may occur earlier in the string. If no match is found, the result is 0.
When used on a BLOB, this function may need to load the entire BLOB into memory. Although it does try to limit memory consumption, this may affect performance if huge BLOBs are involved.
POSITION Examples
position ('be' in 'To be or not to be') -- returns 4
position ('be', 'To be or not to be') -- returns 4
position ('be', 'To be or not to be', 4) -- returns 4
position ('be', 'To be or not to be', 8) -- returns 17
position ('be', 'To be or not to be', 18) -- returns 0
position ('be' in 'Alas, poor Yorick!') -- returns 0
8.3.18. REPLACE()
Replaces all occurrences of a substring in a string
Result type: VARCHAR or BLOB
Syntax: REPLACE (str, find, repl)
Parameter | Description
---|---
str | The string in which the replacement is to take place
find | The string to search for
repl | The replacement string
- This function fully supports text BLOBs of any length and character set.
- If any argument is a BLOB, the result is a BLOB. Otherwise, the result is a VARCHAR(n) with n calculated from the lengths of str, find and repl in such a way that even the maximum possible number of replacements won’t overflow the field.
- If find is the empty string, str is returned unchanged.
- If repl is the empty string, all occurrences of find are deleted from str.
- If any argument is NULL, the result is always NULL, even if nothing would have been replaced.
When used on a BLOB, this function may need to load the entire BLOB into memory. Although it does try to limit memory consumption, this may affect performance if huge BLOBs are involved.
REPLACE Examples
replace ('Billy Wilder', 'il', 'oog') -- returns 'Boogly Woogder'
replace ('Billy Wilder', 'il', '') -- returns 'Bly Wder'
replace ('Billy Wilder', null, 'oog') -- returns NULL
replace ('Billy Wilder', 'il', null) -- returns NULL
replace ('Billy Wilder', 'xyz', null) -- returns NULL (!)
replace ('Billy Wilder', 'xyz', 'abc') -- returns 'Billy Wilder'
replace ('Billy Wilder', '', 'abc') -- returns 'Billy Wilder'
8.3.19. REVERSE()
Reverses a string
Result type: VARCHAR
Syntax: REVERSE (string)
Parameter | Description
---|---
string | An expression of a string type
REVERSE Examples
reverse ('spoonful') -- returns 'lufnoops'
reverse ('Was it a cat I saw?') -- returns '?was I tac a ti saW'
This function is useful if you want to group, search or order on string endings, e.g. when dealing with domain names or email addresses:
create index ix_people_email on people
  computed by (reverse(email));

select * from people
  where reverse(email) starting with reverse('.br');
8.3.20. RIGHT()
Extracts the rightmost part of a string
Result type: VARCHAR or BLOB
Syntax: RIGHT (string, length)
Parameter | Description
---|---
string | An expression of a string type
length | Integer. The number of characters to return
- This function supports text BLOBs of any length.
- If string is a BLOB, the result is a BLOB. Otherwise, the result is a VARCHAR(n) with n the length of the input string.
- If the length argument exceeds the string length, the input string is returned unchanged.
- If the length argument is not a whole number, bankers' rounding (round-to-even) is applied, i.e. 0.5 becomes 0, 1.5 becomes 2, 2.5 becomes 2, 3.5 becomes 4, etc.
When used on a BLOB, this function may need to load the entire BLOB into memory. Although it does try to limit memory consumption, this may affect performance if huge BLOBs are involved.
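RIGHT Examples (a short sketch of the rules above):
right('Hello World', 5)  -- returns 'World'
right('Hello', 10)       -- returns 'Hello' (length exceeds the string length)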
8.3.21. RPAD()
Right-pads a string
Result type: VARCHAR or BLOB
Syntax: RPAD (str, endlen [, padstr])
Parameter | Description
---|---
str | An expression of a string type
endlen | Output string length
padstr | The character or string to be used to pad the source string up to the specified length. Default is space (' ')
Right-pads a string with spaces or with a user-supplied string until a given length is reached.
- This function fully supports text BLOBs of any length and character set.
- If str is a BLOB, the result is a BLOB. Otherwise, the result is a VARCHAR(endlen).
- If padstr is given and equals '' (empty string), no padding takes place.
- If endlen is less than the current string length, the string is truncated to endlen, even if padstr is the empty string.
When used on a BLOB, this function may need to load the entire BLOB into memory. Although it does try to limit memory consumption, this may affect performance if huge BLOBs are involved.
RPAD Examples
rpad ('Hello', 12) -- returns 'Hello '
rpad ('Hello', 12, '-') -- returns 'Hello-------'
rpad ('Hello', 12, '') -- returns 'Hello'
rpad ('Hello', 12, 'abc') -- returns 'Helloabcabca'
rpad ('Hello', 12, 'abcdefghij') -- returns 'Helloabcdefg'
rpad ('Hello', 2) -- returns 'He'
rpad ('Hello', 2, '-') -- returns 'He'
rpad ('Hello', 2, '') -- returns 'He'
8.3.22. SUBSTRING()
Extracts a substring by position and length, or by SQL regular expression
Result type: VARCHAR or BLOB
Syntax:
SUBSTRING ( <substring-args> )

<substring-args> ::=
    str FROM startpos [FOR length]
  | str SIMILAR <similar-pattern> ESCAPE <escape>

<similar-pattern> ::=
  <similar-pattern-R1> <escape> " <similar-pattern-R2> <escape> " <similar-pattern-R3>

Parameter | Description
---|---
str | An expression of a string type
startpos | Integer expression, the position from which to start retrieving the substring
length | The number of characters to retrieve after the startpos
similar-pattern | SQL regular expression pattern to search for the substring
escape | Escape character
Returns a string’s substring starting at the given position, either to the end of the string or with a given length, or extracts a substring using an SQL regular expression pattern.
If any argument is NULL, the result is also NULL.
When used on a BLOB, this function may need to load the entire BLOB into memory. Although it does try to limit memory consumption, this may affect performance if huge BLOBs are involved.
Positional SUBSTRING
In its simple, positional form (with FROM), this function returns the substring starting at character position startpos (the first character being 1). Without the FOR argument, it returns all the remaining characters in the string. With FOR, it returns length characters or the remainder of the string, whichever is shorter.
When startpos is smaller than 1, substring behaves as if the string has 1 - startpos extra positions before the actual first character at position 1. The length is considered from this imaginary start of the string, so the resulting string could be shorter than the specified length, or even empty.
The function fully supports binary and text BLOBs of any length, and with any character set. If str is a BLOB, the result is also a BLOB. For any other argument type, the result is a VARCHAR.
For non-BLOB arguments, the width of the result field is always equal to the length of str, regardless of startpos and length. So, substring('pinhead' from 4 for 2) will return a VARCHAR(7) containing the string 'he'.
insert into AbbrNames(AbbrName)
select substring(LongName from 1 for 3) from LongNames;
select substring('abcdef' from 1 for 2) from rdb$database;
-- result: 'ab'
select substring('abcdef' from 2) from rdb$database;
-- result: 'bcdef'
select substring('abcdef' from 0 for 2) from rdb$database;
-- result: 'a'
-- and NOT 'ab', because there is "nothing" at position 0
select substring('abcdef' from -5 for 2) from rdb$database;
-- result: ''
-- length ends before the actual start of the string
Regular Expression SUBSTRING
In the regular expression form (with SIMILAR), the SUBSTRING function returns part of the string matching an SQL regular expression pattern. If no match is found, NULL is returned.
The SIMILAR pattern is formed from three SQL regular expression patterns, R1, R2 and R3. The entire pattern takes the form of R1 || '<escape>"' || R2 || '<escape>"' || R3, where <escape> is the escape character defined in the ESCAPE clause. R2 is the pattern that matches the substring to extract, and is enclosed between escaped double quotes (<escape>", e.g. “#"” with escape character ‘#’). R1 matches the prefix of the string, and R3 the suffix of the string. Both R1 and R3 are optional (they can be empty), but the pattern must match the entire string. In other words, it is not sufficient to specify a pattern that only finds the substring to extract.
The escaped double quotes around R2 can be compared to defining a single capture group in more common regular expression syntax like PCRE. That is, the full pattern is equivalent to R1(R2)R3, which must match the entire input string, and the capture group is the substring to be returned.
If any one of R1, R2, or R3 is not a zero-length string and does not have the format of an SQL regular expression, an exception is raised.
The full SQL regular expression format is described in Syntax: SQL Regular Expressions.
substring('abcabc' similar 'a#"bcab#"c' escape '#') -- bcab
substring('abcabc' similar 'a#"%#"c' escape '#') -- bcab
substring('abcabc' similar '_#"%#"_' escape '#') -- bcab
substring('abcabc' similar '#"(abc)*#"' escape '#') -- abcabc
substring('abcabc' similar '#"abc#"' escape '#') -- <null>
See also: POSITION(), LEFT(), RIGHT(), CHAR_LENGTH(), CHARACTER_LENGTH(), SIMILAR TO
8.3.23. TRIM()
Trims leading and/or trailing spaces or other substrings from a string
Result type: VARCHAR or BLOB
Syntax:
TRIM ([<adjust>] str)

<adjust> ::= {[<where>] [what]} FROM

<where> ::= BOTH | LEADING | TRAILING

Parameter | Description
---|---
str | An expression of a string type
where | The position the substring is to be removed from — BOTH, LEADING or TRAILING. The default is BOTH
what | The substring that should be removed (multiple times if there are several matches) from the beginning, the end, or both sides of the input string str. By default, it is space (' ')
Removes leading and/or trailing spaces (or optionally other strings) from the input string. If str is a BLOB, the result is a BLOB. Otherwise, it is a VARCHAR(n) with n the formal length of str.
When used on a BLOB, this function may need to load the entire BLOB into memory. Although it does try to limit memory consumption, this may affect performance if huge BLOBs are involved.
TRIM Examples
select trim (' Waste no space ') from rdb$database
-- returns 'Waste no space'
select trim (leading from ' Waste no space ') from rdb$database
-- returns 'Waste no space '
select trim (leading '.' from ' Waste no space ') from rdb$database
-- returns ' Waste no space '
select trim (trailing '!' from 'Help!!!!') from rdb$database
-- returns 'Help'
select trim ('la' from 'lalala I love you Ella') from rdb$database
-- returns ' I love you El'
select trim ('la' from 'Lalala I love you Ella') from rdb$database
-- returns 'Lalala I love you El'
8.3.24. UNICODE_CHAR()
Character from Unicode code point
Result type: CHAR(1) CHARACTER SET UTF8
Syntax: UNICODE_CHAR (code)
Parameter | Description
---|---
code | The Unicode code point (range 0…0x10FFFF)
Returns the character corresponding to the Unicode code point passed in the argument.
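UNICODE_CHAR Example (65 is the Unicode code point of the letter 'A'):
select unicode_char(65) from rdb$database
-- returns 'A'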
8.3.25. UNICODE_VAL()
Unicode code point from string
Result type: INTEGER
Syntax: UNICODE_VAL (ch)
Parameter | Description
---|---
ch | A string of the UTF8 character set
Returns the Unicode code point (range 0…0x10FFFF) of the character passed in.
- If the argument is a string with more than one character, the Unicode code point of the first character is returned.
- If the argument is an empty string, 0 is returned.
- If the argument is NULL, NULL is returned.
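UNICODE_VAL Examples (illustrating the rules above; 65 is the code point of 'A'):
select unicode_val('A') from rdb$database    -- returns 65
select unicode_val('ABC') from rdb$database  -- returns 65 (only the first character counts)
select unicode_val('') from rdb$database     -- returns 0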
8.3.26. UPPER()
Converts a string to uppercase
Result type: (VAR)CHAR, (VAR)BINARY or BLOB
Syntax: UPPER (str)
Parameter | Description
---|---
str | An expression of a string type
Returns the uppercase equivalent of the input string. The exact result depends on the character set. With ASCII or NONE, for instance, only ASCII characters are uppercased; with character set OCTETS/(VAR)BINARY, the entire string is returned unchanged.
8.4. Date and Time Functions
8.4.1. DATEADD()
Adds or subtracts datetime units from a datetime value
Result type: DATE, TIME or TIMESTAMP
Syntax:
DATEADD (<args>)

<args> ::= <amount> <unit> TO <datetime> | <unit>, <amount>, <datetime>

<amount> ::= an integer expression (negative to subtract)
<unit> ::= YEAR | MONTH | WEEK | DAY | HOUR | MINUTE | SECOND | MILLISECOND
<datetime> ::= a DATE, TIME or TIMESTAMP expression

Parameter | Description
---|---
amount | An integer expression (negative to subtract)
unit | Date/time unit
datetime | An expression of the DATE, TIME or TIMESTAMP type
Adds the specified number of years, months, weeks, days, hours, minutes, seconds or milliseconds to a date/time value.
- The result type is determined by the third argument.
- With TIMESTAMP and DATE arguments, all units can be used.
- With TIME arguments, only HOUR, MINUTE, SECOND and MILLISECOND can be used.
Examples of DATEADD
dateadd (28 day to current_date)
dateadd (-6 hour to current_time)
dateadd (month, 9, DateOfConception)
dateadd (-38 week to DateOfBirth)
dateadd (minute, 90, cast('now' as time))
dateadd (? year to date '11-Sep-1973')
select
cast(dateadd(-1 * extract(millisecond from ts) millisecond to ts) as varchar(30)) as t,
extract(millisecond from ts) as ms
from (
select timestamp '2014-06-09 13:50:17.4971' as ts
from rdb$database
) a
T                          MS
------------------------ ------
2014-06-09 13:50:17.0000  497.1
8.4.2. DATEDIFF()
Difference between two datetime values in a datetime unit
Result type: BIGINT, or NUMERIC(18,1) for MILLISECOND
Syntax:
DATEDIFF (<args>)

<args> ::= <unit> FROM <moment1> TO <moment2> | <unit>, <moment1>, <moment2>

<unit> ::= YEAR | MONTH | WEEK | DAY | HOUR | MINUTE | SECOND | MILLISECOND
<momentN> ::= a DATE, TIME or TIMESTAMP expression

Parameter | Description
---|---
unit | Date/time unit
moment1 | An expression of the DATE, TIME or TIMESTAMP type
moment2 | An expression of the DATE, TIME or TIMESTAMP type
Returns the number of years, months, weeks, days, hours, minutes, seconds or milliseconds elapsed between two date/time values.
- DATE and TIMESTAMP arguments can be combined. No other mixes are allowed.
- With TIMESTAMP and DATE arguments, all units can be used.
- With TIME arguments, only HOUR, MINUTE, SECOND and MILLISECOND can be used.
- DATEDIFF doesn’t look at any smaller units than the one specified in the first argument. As a result,
  - datediff (year, date '1-Jan-2009', date '31-Dec-2009') returns 0, but
  - datediff (year, date '31-Dec-2009', date '1-Jan-2010') returns 1
- It does, however, look at all the bigger units. So:
  - datediff (day, date '26-Jun-1908', date '11-Sep-1973') returns 23818
- A negative result value indicates that moment2 lies before moment1.
8.4.3. EXTRACT()
Extracts a datetime unit from a datetime value
Result type: SMALLINT or NUMERIC
Syntax:
EXTRACT (<part> FROM <datetime>)

<part> ::= YEAR | MONTH | QUARTER | WEEK | DAY | WEEKDAY | YEARDAY
         | HOUR | MINUTE | SECOND | MILLISECOND | TIMEZONE_HOUR | TIMEZONE_MINUTE

<datetime> ::= a DATE, TIME or TIMESTAMP expression

Parameter | Description
---|---
part | Date/time unit
datetime | An expression of the DATE, TIME or TIMESTAMP type
Extracts and returns an element from a DATE, TIME or TIMESTAMP expression.
Returned Data Types and Ranges
The returned data types and possible ranges are shown in the table below. If you try to extract a part that isn’t present in the date/time argument (e.g. SECOND from a DATE or YEAR from a TIME), an error occurs.
Part | Type | Range | Comment
---|---|---|---
YEAR | SMALLINT | 1-9999 |
MONTH | SMALLINT | 1-12 |
QUARTER | SMALLINT | 1-4 |
WEEK | SMALLINT | 1-53 |
DAY | SMALLINT | 1-31 |
WEEKDAY | SMALLINT | 0-6 | 0 = Sunday
YEARDAY | SMALLINT | 0-365 | 0 = January 1
HOUR | SMALLINT | 0-23 |
MINUTE | SMALLINT | 0-59 |
SECOND | NUMERIC(9,4) | 0.0000-59.9999 | includes millisecond as fraction
MILLISECOND | NUMERIC(9,1) | 0.0-999.9 |
TIMEZONE_HOUR | SMALLINT | -23 - +23 |
TIMEZONE_MINUTE | SMALLINT | -59 - +59 |
MILLISECOND
Extracts the millisecond value from a TIME or TIMESTAMP. The data type returned is NUMERIC(9,1).
If you extract the millisecond from CURRENT_TIME, be aware that this variable defaults to seconds precision, so the result will always be 0. Extract from CURRENT_TIME(3) or CURRENT_TIMESTAMP to get milliseconds precision.
WEEK
Extracts the ISO-8601 week number from a DATE or TIMESTAMP. ISO-8601 weeks start on a Monday and always have the full seven days. Week 1 is the first week that has a majority (at least 4) of its days in the new year. The first 1-3 days of the year may belong to the last week (52 or 53) of the previous year. Likewise, a year’s final 1-3 days may belong to week 1 of the following year.
Be careful when combining WEEK and YEAR: as described above, the first or last days of a year may belong to a week of the adjacent year. Please also notice that WEEKDAY is not ISO-8601 compliant: it returns 0 for Sunday, whereas the ISO-8601 weekday numbering is 1 (Monday) to 7 (Sunday).
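An illustrative sketch of the week rules above (1 January 2023 was a Sunday, so it still belongs to the last ISO week of 2022):
select extract(week from date '2023-01-01') from rdb$database     -- returns 52
select extract(week from date '2023-01-02') from rdb$database     -- returns 1
select extract(weekday from date '2023-01-01') from rdb$database  -- returns 0 (Sunday)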
8.4.4. FIRST_DAY()
Returns the first day of a time period containing a datetime value
Result type: DATE or TIMESTAMP (with or without time zone)
Syntax:
FIRST_DAY(OF <period> FROM date_or_timestamp)

<period> ::= YEAR | MONTH | QUARTER | WEEK

Parameter | Description
---|---
date_or_timestamp | Expression of type DATE or TIMESTAMP
FIRST_DAY returns a date or timestamp (same as the type of date_or_timestamp) with the first day of the year, month, quarter or week of a given date or timestamp value.
The first day of the week is considered Sunday, following the same rules as for EXTRACT() with WEEKDAY. When a timestamp is passed, the return value preserves the time part.
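FIRST_DAY Examples (the results follow directly from the description above):
select first_day(of month from date '2023-08-15') from rdb$database    -- returns 2023-08-01
select first_day(of year from date '2023-08-15') from rdb$database     -- returns 2023-01-01
select first_day(of quarter from date '2023-08-15') from rdb$database  -- returns 2023-07-01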
8.4.5. LAST_DAY()
Returns the last day of a time period containing a datetime value
Result type: DATE or TIMESTAMP (with or without time zone)
Syntax:
LAST_DAY(OF <period> FROM date_or_timestamp)

<period> ::= YEAR | MONTH | QUARTER | WEEK

Parameter | Description
---|---
date_or_timestamp | Expression of type DATE or TIMESTAMP
LAST_DAY returns a date or timestamp (same as the type of date_or_timestamp) with the last day of the year, month, quarter or week of a given date or timestamp value.
The last day of the week is considered Saturday, following the same rules as for EXTRACT() with WEEKDAY. When a timestamp is passed, the return value preserves the time part.
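LAST_DAY Examples (the results follow directly from the description above):
select last_day(of month from date '2023-02-15') from rdb$database  -- returns 2023-02-28
select last_day(of year from date '2023-08-15') from rdb$database   -- returns 2023-12-31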
8.5. Type Casting Functions
8.5.1. CAST()
Converts a value from one data type to another
Result type: as specified by target_type
Syntax:
CAST (<expression> AS <target_type>)

<target_type> ::= <domain_or_non_array_type> | <array_datatype>

<domain_or_non_array_type> ::= !! See Scalar Data Types Syntax !!
<array_datatype> ::= !! See Array Data Types Syntax !!

Parameter | Description
---|---
expression | SQL expression
target_type | SQL data type
CAST converts an expression to the desired data type or domain. If the conversion is not possible, an error is raised.
“Shorthand” Syntax
Alternative syntax, supported only when casting a string literal to a DATE, TIME or TIMESTAMP:

datatype 'date/timestring'
This syntax was already available in InterBase, but was never properly documented. In the SQL standard, this feature is called “datetime literals”.
Since Firebird 4.0, the shorthand cast (datetime literal) no longer supports the datetime mnemonics 'NOW', 'TODAY', 'TOMORROW' and 'YESTERDAY'; use a full CAST expression for those.
Allowed Type Conversions
The following table shows the type conversions possible with CAST
.
From | To
---|---
Numeric types | Numeric types, [VAR]CHAR, BLOB
[VAR]CHAR, BLOB | [VAR]CHAR, BLOB, Numeric types, DATE, TIME, TIMESTAMP, BOOLEAN
DATE, TIME | [VAR]CHAR, BLOB, TIMESTAMP
TIMESTAMP | [VAR]CHAR, BLOB, DATE, TIME
BOOLEAN | [VAR]CHAR, BLOB
Keep in mind that sometimes information is lost, for instance when you cast a TIMESTAMP to a DATE. Also, the fact that types are CAST-compatible is in itself no guarantee that a conversion will succeed. “CAST(123456789 AS SMALLINT)” will definitely result in an error, as will “CAST('Judgement Day' AS DATE)”.
Casting Parameters
You can also cast statement parameters to a data type:
cast (? as integer)
This gives you control over the type of the parameter set up by the engine. Please notice that with statement parameters, you always need a full-syntax cast — shorthand casts are not supported.
Casting to a Domain or its Type
Casting to a domain or its base type is supported.
When casting to a domain, any constraints (NOT NULL and/or CHECK) declared for the domain must be satisfied, or the cast will fail. Please be aware that a CHECK passes if it evaluates to TRUE or NULL!
So, given the following statements:
create domain quint as int check (value >= 5000);
select cast (2000 as quint) from rdb$database; (1)
select cast (8000 as quint) from rdb$database; (2)
select cast (null as quint) from rdb$database; (3)
only cast number 1 will result in an error.
When the TYPE OF modifier is used, the expression is cast to the base type of the domain, ignoring any constraints. With domain quint defined as above, the following two casts are equivalent and will both succeed:
select cast (2000 as type of quint) from rdb$database;
select cast (2000 as int) from rdb$database;
If TYPE OF is used with a (VAR)CHAR type, its character set and collation are retained:
create domain iso20 varchar(20) character set iso8859_1;
create domain dunl20 varchar(20) character set iso8859_1 collate du_nl;
create table zinnen (zin varchar(20));
commit;
insert into zinnen values ('Deze');
insert into zinnen values ('Die');
insert into zinnen values ('die');
insert into zinnen values ('deze');
select cast(zin as type of iso20) from zinnen order by 1;
-- returns Deze -> Die -> deze -> die
select cast(zin as type of dunl20) from zinnen order by 1;
-- returns deze -> Deze -> die -> Die
If a domain’s definition is changed, existing CASTs to that domain or its type may become invalid. If these CASTs occur in PSQL modules, their invalidation may be detected.
Casting to a Column’s Type
It is also possible to cast expressions to the type of an existing table or view column. Only the type itself is used; in the case of string types, this includes the character set but not the collation. Constraints and default values of the source column are not applied.
create table ttt (
s varchar(40) character set utf8 collate unicode_ci_ai
);
commit;
select cast ('Jag har många vänner' as type of column ttt.s)
from rdb$database;
If a column’s definition is altered, existing CASTs to that column’s type may become invalid. If these CASTs occur in PSQL modules, their invalidation may be detected.
Cast Examples
A full-syntax cast:
select cast ('12' || '-June-' || '1959' as date) from rdb$database
A shorthand string-to-date cast:
update People set AgeCat = 'Old'
where BirthDate < date '1-Jan-1943'
Notice that you can drop even the shorthand cast from the example above, as the engine will understand from the context (comparison to a DATE
field) how to interpret the string:
update People set AgeCat = 'Old'
where BirthDate < '1-Jan-1943'
However, this is not always possible. The cast below cannot be dropped, otherwise the engine would find itself with an integer to be subtracted from a string:
select cast('today' as date) - 7 from rdb$database
8.6. Bitwise Functions
8.6.1. BIN_AND()
Bitwise AND
Result type: integer type (the widest type of the arguments). A SMALLINT result is returned only if all the arguments are explicit SMALLINTs or NUMERIC(n, 0) with n <= 4; otherwise small integers return an INTEGER result.
Syntax: BIN_AND (number, number [, number ...])
Parameter | Description
---|---
number | A number of an integer type
Returns the result of the bitwise AND operation on the argument(s).
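A brief example (15 is binary 1111, 9 is 1001; their common bits are 1001):
select bin_and(15, 9) from rdb$database  -- returns 9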
8.6.2. BIN_NOT()
Bitwise NOT
Result type: integer type matching the argument. A SMALLINT result is returned only if the argument is an explicit SMALLINT or NUMERIC(n, 0) with n <= 4; otherwise small integers return an INTEGER result.
Syntax: BIN_NOT (number)
Parameter | Description
---|---
number | A number of an integer type
Returns the result of the bitwise NOT operation on the argument, i.e. one’s complement.
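A brief example (the one's complement of 0 has all bits set, which is -1 in two's-complement representation):
select bin_not(0) from rdb$database  -- returns -1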
8.6.3. BIN_OR()
Bitwise OR
Result type: integer type (the widest type of the arguments). A SMALLINT result is returned only if all the arguments are explicit SMALLINTs or NUMERIC(n, 0) with n <= 4; otherwise small integers return an INTEGER result.
Syntax: BIN_OR (number, number [, number ...])
Parameter | Description
---|---
number | A number of an integer type
Returns the result of the bitwise OR operation on the argument(s).
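A brief example (1 is binary 0001, 2 is 0010; combining their bits gives 0011):
select bin_or(1, 2) from rdb$database  -- returns 3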
8.6.4. BIN_SHL()
Bitwise left-shift
Result type: BIGINT or INT128 depending on the first argument
Syntax: BIN_SHL (number, shift)
Parameter | Description
---|---
number | A number of an integer type
shift | The number of bits the number value is shifted by
Returns the first argument bitwise left-shifted by the second argument, i.e. a << b or a·2^b.
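A brief example (1 shifted left by 4 bits equals 2^4):
select bin_shl(1, 4) from rdb$database  -- returns 16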
8.6.5. BIN_SHR()
Bitwise right-shift with sign extension
The result is BIGINT or INT128, depending on the first argument.
BIN_SHR (number, shift)
Parameter | Description |
---|---|
number | A number of an integer type |
shift | The number of bits the number value is shifted by |
Returns the first argument bitwise right-shifted by the second argument, i.e. a >> b or a/2^b.
The operation performed is an arithmetic right shift (x86 SAR), meaning that the sign of the first operand is always preserved.
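A minimal sketch showing the sign preservation:

```sql
-- Arithmetic shift: the sign bit of the first operand is preserved
select bin_shr(16, 2), bin_shr(-16, 2) from rdb$database;
-- returns 4 and -4
```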
8.6.6. BIN_XOR()
Bitwise XOR
The result is of an integer type (the widest type of the arguments). A SMALLINT result is returned only if all the arguments are explicit SMALLINTs or NUMERIC(n, 0) with n <= 4; otherwise small integers return an INTEGER result.
BIN_XOR (number, number [, number ...])
Parameter | Description |
---|---|
number | A number of an integer type |
Returns the result of the bitwise XOR operation on the argument(s).
8.7. UUID Functions
8.7.1. CHAR_TO_UUID()
Converts a string UUID to its binary representation
BINARY(16)
CHAR_TO_UUID (ascii_uuid)
Parameter | Description |
---|---|
ascii_uuid | A 36-character representation of UUID, with hyphens (‘-’) at positions 9, 14, 19 and 24 and valid hexadecimal digits at all other positions |
Converts a human-readable 36-char UUID string to the corresponding 16-byte UUID.
CHAR_TO_UUID Examples
select char_to_uuid('A0bF4E45-3029-2a44-D493-4998c9b439A3') from rdb$database
-- returns A0BF4E4530292A44D4934998C9B439A3 (16-byte string)
select char_to_uuid('A0bF4E45-3029-2A44-X493-4998c9b439A3') from rdb$database
-- error: Human readable UUID argument for CHAR_TO_UUID must
-- have hex digit at position 20 instead of "X (ASCII 88)"
8.7.2. GEN_UUID()
Generates a random binary UUID
BINARY(16)
GEN_UUID ()
Returns a universally unique ID as a 16-byte character string.
8.7.3. UUID_TO_CHAR()
Converts a binary UUID to its string representation
CHAR(36)
UUID_TO_CHAR (uuid)
Parameter | Description |
---|---|
uuid | 16-byte UUID |
Converts a 16-byte UUID to its 36-character, human-readable ASCII representation.
UUID_TO_CHAR Examples
select uuid_to_char(x'876C45F4569B320DBCB4735AC3509E5F') from rdb$database
-- returns '876C45F4-569B-320D-BCB4-735AC3509E5F'
select uuid_to_char(gen_uuid()) from rdb$database
-- returns e.g. '680D946B-45FF-DB4E-B103-BB5711529B86'
select uuid_to_char('Firebird swings!') from rdb$database
-- returns '46697265-6269-7264-2073-77696E677321'
8.8. Functions for Sequences (Generators)
8.8.1. GEN_ID()
Increments a sequence (generator) value and returns its new value
BIGINT (dialect 2 and 3) or INTEGER (dialect 1)
GEN_ID (generator-name, step)
Parameter | Description |
---|---|
generator-name | Identifier name of a generator (sequence) |
step | An integer expression of the increment |
If step equals 0, the function will leave the value of the generator unchanged and return its current value.
The SQL-compliant NEXT VALUE FOR syntax is preferred, except when an increment other than the configured increment of the sequence is needed.
If the value of the step parameter is less than zero, it will decrease the value of the generator. You should be cautious with such manipulations in the database, as they could compromise data integrity (meaning, subsequent insert statements could fail due to generation of duplicate id values).
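A short sketch, assuming a sequence named SEQ_ID exists in the database:

```sql
-- Advance SEQ_ID by one and return the new value
select gen_id(seq_id, 1) from rdb$database;

-- Read the current value without changing it (step = 0)
select gen_id(seq_id, 0) from rdb$database;
```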
8.9. Conditional Functions
8.9.1. COALESCE()
Returns the first non-NULL argument
Depends on input
COALESCE (<exp1>, <exp2> [, <expN> ... ])
Parameter | Description |
---|---|
exp1, exp2 … expN | A list of expressions of compatible types |
The COALESCE function takes two or more arguments and returns the value of the first non-NULL argument.
If all the arguments evaluate to NULL, the result is NULL.
COALESCE Examples
This example picks the Nickname from the Persons table.
If it happens to be NULL, it goes on to FirstName.
If that too is NULL, 'Mr./Mrs.' is used.
Finally, it adds the family name.
All in all, it tries to use the available data to compose a full name that is as informal as possible.
This scheme only works if absent nicknames and first names are NULL: if one of them is an empty string, COALESCE will happily return that to the caller.
That problem can be fixed by using NULLIF().
select
coalesce (Nickname, FirstName, 'Mr./Mrs.') || ' ' || LastName
as FullName
from Persons
8.9.2. DECODE()
Shorthand “simple CASE”-equivalent function
Depends on input
DECODE(<testexpr>, <expr1>, <result1> [<expr2>, <result2> ...] [, <defaultresult>])
Parameter | Description |
---|---|
testexpr | An expression of any compatible type that is compared to the expressions expr1, expr2 … exprN |
expr1, expr2, … exprN | Expressions of any compatible types, to which the testexpr expression is compared |
result1, result2, … resultN | Returned values of any type |
defaultresult | The expression to be returned if none of the conditions is met |
DECODE is a shorthand for the so-called “simple CASE” construct, in which a given expression is compared to a number of other expressions until a match is found.
The result is determined by the value listed after the matching expression.
If no match is found, the default result is returned, if present, otherwise NULL is returned.
The equivalent CASE construct:

CASE <testexpr>
  WHEN <expr1> THEN <result1>
  [WHEN <expr2> THEN <result2> ...]
  [ELSE <defaultresult>]
END
Matching is done with the ‘=’ operator, so if testexpr is NULL, it will not match any of the exprs, not even those that resolve to NULL.
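A sketch of DECODE in use; the table and column names are illustrative only:

```sql
select name,
       decode(state,
              0, 'deleted',
              1, 'active',
              'unknown') as state_name  -- 'unknown' is the default result
  from things;
```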
8.9.3. IIF()
Ternary conditional function
Depends on input
IIF (<condition>, ResultT, ResultF)
Parameter | Description |
---|---|
condition | A boolean expression |
resultT | The value returned if the condition is true |
resultF | The value returned if the condition is false |
IIF takes three arguments.
If the first evaluates to true, the second argument is returned; otherwise the third is returned.
IIF could be likened to the ternary “<condition> ? resultT : resultF” operator in C-like languages.
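A sketch of IIF, reusing the Persons table from the COALESCE example above (the sex column is illustrative):

```sql
select iif(sex = 'M', 'Sir', 'Madam') || ' ' || LastName
  from Persons;
```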
8.9.4. MAXVALUE()
Returns the maximum value of its arguments
Varies according to input — result will be of the same data type as the first expression in the list (expr1).
MAXVALUE (<expr1> [, ... , <exprN> ])
Parameter | Description |
---|---|
expr1 … exprN | List of expressions of compatible types |
Returns the maximum value from a list of numerical, string, or date/time expressions.
This function fully supports text BLOBs of any length and character set.
If one or more expressions resolve to NULL, MAXVALUE returns NULL.
This behaviour differs from the aggregate function MAX.
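A sketch of MAXVALUE; the table and column names are illustrative:

```sql
-- Per-row maximum of two price columns
select maxvalue(price_1, price_2) as best_price
  from pricelist;
```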
8.9.5. MINVALUE()
Returns the minimum value of its arguments
Varies according to input — result will be of the same data type as the first expression in the list (expr1).
MINVALUE (<expr1> [, ... , <exprN> ])
Parameter | Description |
---|---|
expr1 … exprN | List of expressions of compatible types |
Returns the minimum value from a list of numerical, string, or date/time expressions.
This function fully supports text BLOBs of any length and character set.
If one or more expressions resolve to NULL, MINVALUE returns NULL.
This behaviour differs from the aggregate function MIN.
8.9.6. NULLIF()
Conditional NULL function
Depends on input
NULLIF (<exp1>, <exp2>)
Parameter | Description |
---|---|
exp1 | An expression |
exp2 | Another expression of a data type compatible with exp1 |
NULLIF returns the value of the first argument, unless it is equal to the second.
In that case, NULL is returned.
NULLIF Example
select avg( nullif(Weight, -1) ) from FatPeople
This will return the average weight of the persons listed in FatPeople, excluding those having a weight of -1, since AVG skips NULL data.
Presumably, -1 indicates “weight unknown” in this table.
A plain AVG(Weight) would include the -1 weights, thus skewing the result.
8.10. Special Functions for DECFLOAT
8.10.1. COMPARE_DECFLOAT()
Compares two DECFLOAT values to be equal, different or unordered
SMALLINT
COMPARE_DECFLOAT (decfloat1, decfloat2)
Parameter | Description |
---|---|
decfloat1, decfloat2 | Value or expression of type DECFLOAT |
The result is a SMALLINT value, as follows:

Result | Meaning |
---|---|
0 | Values are equal |
1 | First value is less than second |
2 | First value is greater than second |
3 | Values are unordered, i.e. one or both is NaN/sNaN |
Unlike the comparison operators (‘<’, ‘=’, ‘>’, etc.), comparison is exact: COMPARE_DECFLOAT(2.17, 2.170) returns 2, not 0.
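The exactness can be seen directly:

```sql
select compare_decfloat(2.17, 2.170) from rdb$database;
-- returns 2, not 0: the values compare equal with '=',
-- but have different exponents
```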
8.10.2. NORMALIZE_DECFLOAT()
Returns the simplest, normalized form of a DECFLOAT
NORMALIZE_DECFLOAT (decfloat_value)
Parameter | Description |
---|---|
decfloat_value | Value or expression of type DECFLOAT |
For any non-zero value, trailing zeroes are removed with appropriate correction of the exponent.
8.10.3. QUANTIZE()
Returns a value that is equal in value — except for rounding — to the first argument, but with the same exponent as the second argument
DECFLOAT
QUANTIZE (decfloat_value, exp_value)
Parameter | Description |
---|---|
decfloat_value | Value or expression to quantize; needs to be of type DECFLOAT |
exp_value | Value or expression to use for its exponent; needs to be of type DECFLOAT |
QUANTIZE returns a DECFLOAT value that is equal in value and sign (except for rounding) to decfloat_value, and that has an exponent equal to the exponent of exp_value.
The type of the return value is DECFLOAT(16) if both arguments are DECFLOAT(16), otherwise the result type is DECFLOAT(34).
The target exponent is the exponent used in the internal decfloat representation of exp_value; values that compare as equal (for example 10, 1E1 and 1.0E1) can carry different exponents.
If the exponent of decfloat_value is greater than the one of exp_value, the coefficient of decfloat_value is multiplied by a power of ten, and its exponent decreased. If the exponent is smaller, then its coefficient is rounded using the current decfloat rounding mode, and its exponent is increased.
When it is not possible to achieve the target exponent because the coefficient would exceed the target precision (16 or 34 decimal digits), either a “Decfloat float invalid operation” error is raised or NaN
is returned (depending on the current decfloat traps configuration).
There are almost no restrictions on the exp_value.
However, in almost all usages, NaN/sNaN/Infinity will produce an exception (unless allowed by the current decfloat traps configuration), NULL will make the function return NULL, and so on.
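A short sketch, assuming the default rounding mode:

```sql
-- Give 1234 (coefficient 1234, exponent 0) the exponent of 1E1
-- (exponent 1): the coefficient is rounded to 123
select quantize(1234, 1e1) from rdb$database;
-- returns 1.23E+3
```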
8.10.4. TOTALORDER()
Determines the total or linear order of its arguments
SMALLINT
TOTALORDER (decfloat1, decfloat2)
Parameter | Description |
---|---|
decfloat1, decfloat2 | Value or expression of type DECFLOAT |
TOTALORDER compares two DECFLOAT values including any special values.
The comparison is exact, and returns a SMALLINT, one of:

Result | Meaning |
---|---|
-1 | First value is less than second |
0 | Values are equal |
1 | First value is greater than second |
For TOTALORDER comparisons, DECFLOAT values are ordered as follows:
-NaN < -sNaN < -INF < -0.1 < -0.10 < -0 < 0 < 0.10 < 0.1 < INF < sNaN < NaN
8.11. Cryptographic Functions
8.11.1. DECRYPT()
Decrypts data using a symmetric cipher
VARBINARY or BLOB
DECRYPT ( encrypted_input
USING <algorithm> [MODE <mode>]
KEY key
[IV iv] [<ctr_type>] [CTR_LENGTH ctr_length]
[COUNTER initial_counter] )
!! See the syntax of ENCRYPT for further rules !!
Parameter | Description |
---|---|
encrypted_input | Encrypted input as a blob or (binary) string |

See ENCRYPT() for the descriptions of the remaining parameters.
8.11.2. ENCRYPT()
Encrypts data using a symmetric cipher
VARBINARY or BLOB
ENCRYPT ( input
  USING <algorithm> [MODE <mode>]
  KEY key
  [IV iv] [<ctr_type>] [CTR_LENGTH ctr_length]
  [COUNTER initial_counter] )

<algorithm> ::= <block_cipher> | <stream_cipher>
<block_cipher> ::= AES | ANUBIS | BLOWFISH | KHAZAD | RC5 | RC6 | SAFER+ | TWOFISH | XTEA
<stream_cipher> ::= CHACHA20 | RC4 | SOBER128
<mode> ::= CBC | CFB | CTR | ECB | OFB
<ctr_type> ::= CTR_BIG_ENDIAN | CTR_LITTLE_ENDIAN
Parameter | Description |
---|---|
input | Input to encrypt as a blob or (binary) string |
algorithm | The algorithm to use for encryption |
mode | The algorithm mode; only for block ciphers |
key | The encryption/decryption key |
iv | Initialization vector or nonce; should be specified for block ciphers in all modes except ECB, and for the stream ciphers CHACHA20 and SOBER128 |
ctr_type | Endianness of the counter; only for CTR mode |
ctr_length | Counter length; only for CTR mode |
initial_counter | Initial counter value; only for CHACHA20 |
Algorithm | Key size (bytes) | Block size (bytes) | Notes |
---|---|---|---|
Block Ciphers | | | |
AES | 16, 24, 32 | 16 | Key size determines the AES variant: AES-128, AES-192 or AES-256 |
ANUBIS | 16 - 40, in steps of 4 (4x) | 16 | |
BLOWFISH | 8 - 56 | 8 | |
KHAZAD | 16 | 8 | |
RC5 | 8 - 128 | 8 | |
RC6 | 8 - 128 | 16 | |
SAFER+ | 16, 24, 32 | 16 | |
TWOFISH | 16, 24, 32 | 16 | |
XTEA | 16 | 8 | |
Stream Ciphers | | | |
CHACHA20 | 16, 32 | 1 | Nonce size (IV) is 8 or 12 bytes. For nonce size 8, initial_counter is a 64-bit integer, for size 12, 32-bit. |
RC4 | 5 - 256 | 1 | |
SOBER128 | 4x | 1 | Nonce size (IV) is 4y bytes, the length is independent of key size. |
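A minimal round trip with a stream cipher; the key and IV literals are illustrative only, and in real use the key must of course be kept secret:

```sql
-- SOBER128: key length must be a multiple of 4, nonce a multiple of 4.
-- Decrypting with the same algorithm, key and IV restores the input.
select cast(decrypt(
         encrypt('Sample text' using sober128 key 'AbcdAbcdAbcdAbcd' iv '01234567')
         using sober128 key 'AbcdAbcdAbcdAbcd' iv '01234567')
       as varchar(20))
  from rdb$database;
-- returns 'Sample text'
```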
8.11.3. RSA_DECRYPT()
Decrypts data using an RSA private key and removes OAEP or PKCS 1.5 padding
VARBINARY
RSA_DECRYPT (encrypted_input KEY private_key [LPARAM tag_string] [HASH <hash>] [PKCS_1_5]) <hash> ::= MD5 | SHA1 | SHA256 | SHA512
Parameter | Description |
---|---|
encrypted_input | Input data to decrypt |
private_key | Private key to apply, PKCS#1 format |
tag_string | An additional system-specific tag to identify which system encrypted the message; default is NULL |
hash | The hash used for OAEP padding; default is SHA256 |
RSA_DECRYPT decrypts encrypted_input using the RSA private key and then removes the padding from the resulting data.
By default, OAEP padding is used.
The PKCS_1_5 option will switch to the less secure PKCS 1.5 padding.
The PKCS_1_5 option exists only for backward compatibility; for security reasons, it should not be used in new projects.
RSA_DECRYPT Examples
Run the examples of RSA_PRIVATE(), RSA_PUBLIC() and RSA_ENCRYPT() first.
select cast(rsa_decrypt(rdb$get_context('USER_SESSION', 'msg')
key rdb$get_context('USER_SESSION', 'private_key')) as varchar(128))
from rdb$database;
8.11.4. RSA_ENCRYPT()
Pads data using OAEP or PKCS 1.5 and then encrypts it with an RSA public key
VARBINARY
RSA_ENCRYPT (input KEY public_key [LPARAM tag_string] [HASH <hash>] [PKCS_1_5]) <hash> ::= MD5 | SHA1 | SHA256 | SHA512
Parameter | Description |
---|---|
input | Input data to encrypt |
public_key | Public key to apply, PKCS#1 format |
tag_string | An additional system-specific tag to identify which system encrypted the message; default is NULL |
hash | The hash used for OAEP padding; default is SHA256 |
RSA_ENCRYPT pads input using the OAEP or PKCS 1.5 padding scheme and then encrypts it using the specified RSA public key.
This function is normally used to encrypt short symmetric keys which are then used in block ciphers to encrypt a message.
By default, OAEP padding is used.
The PKCS_1_5 option will switch to the less secure PKCS 1.5 padding.
The PKCS_1_5 option exists only for backward compatibility; for security reasons, it should not be used in new projects.
RSA_ENCRYPT Examples
Run the examples of RSA_PRIVATE() and RSA_PUBLIC() first.
select rdb$set_context('USER_SESSION', 'msg', rsa_encrypt('Some message'
key rdb$get_context('USER_SESSION', 'public_key'))) from rdb$database;
8.11.5. RSA_PRIVATE()
Generates an RSA private key
VARBINARY
RSA_PRIVATE (key_length)
Parameter | Description |
---|---|
key_length | Key length in bytes; minimum 4, maximum 1024. A size of 256 bytes (2048 bits) or larger is recommended. |
RSA_PRIVATE generates an RSA private key of the specified length (in bytes) in PKCS#1 format.
The larger the length specified, the longer it takes for the function to generate a private key.
RSA_PRIVATE Examples
select rdb$set_context('USER_SESSION', 'private_key', rsa_private(256))
from rdb$database;
Putting private keys in the context variables is not secure; we’re doing it here for demonstration purposes.
SYSDBA and users with the role RDB$ADMIN can see the context variables of other attachments through the monitoring tables.
8.11.6. RSA_PUBLIC()
Generates an RSA public key
VARBINARY
RSA_PUBLIC (private_key)
Parameter | Description |
---|---|
private_key | RSA private key in PKCS#1 format |
RSA_PUBLIC returns the RSA public key in PKCS#1 format for the provided RSA private key (also PKCS#1 format).
RSA_PUBLIC Examples
Run the example of RSA_PRIVATE() first.
select rdb$set_context('USER_SESSION', 'public_key',
rsa_public(rdb$get_context('USER_SESSION', 'private_key'))) from rdb$database;
8.11.7. RSA_SIGN_HASH()
PSS encodes a message hash and signs it with an RSA private key
VARBINARY
RSA_SIGN_HASH (message_digest KEY private_key [HASH <hash>] [SALT_LENGTH salt_length] [PKCS_1_5]) <hash> ::= MD5 | SHA1 | SHA256 | SHA512
Parameter | Description |
---|---|
message_digest | Hash of the message to sign. The hash algorithm used should match hash |
private_key | RSA private key in PKCS#1 format |
hash | Hash to generate PSS encoding; default is SHA256 |
salt_length | Length of the desired salt in bytes; default is 8; minimum 1, maximum 32. The recommended value is between 8 and 16. |
RSA_SIGN_HASH performs PSS encoding of the message_digest to be signed, and signs it using the RSA private key.
By default, PSS encoding is used.
The PKCS_1_5 option will switch to the less secure PKCS 1.5 padding.
The PKCS_1_5 option exists only for backward compatibility; for security reasons, it should not be used in new projects.
This function expects the hash of a message (or message digest), not the actual message. The hash argument should specify the algorithm that was used to generate that hash. A function that accepts the actual message to hash might be introduced in a future version of Firebird. |
RSA_SIGN_HASH Examples
Run the example of RSA_PRIVATE() first.
select rdb$set_context('USER_SESSION', 'msg',
rsa_sign_hash(crypt_hash('Test message' using sha256)
key rdb$get_context('USER_SESSION', 'private_key'))) from rdb$database;
8.11.8. RSA_VERIFY_HASH()
Verifies a message hash against a signature using an RSA public key
BOOLEAN
RSA_VERIFY_HASH (message_digest SIGNATURE signature KEY public_key [HASH <hash>] [SALT_LENGTH salt_length] [PKCS_1_5]) <hash> ::= MD5 | SHA1 | SHA256 | SHA512
Parameter | Description |
---|---|
message_digest | Hash of the message to verify. The hash algorithm used should match hash |
signature | Expected signature of the input, as generated by RSA_SIGN_HASH |
public_key | RSA public key in PKCS#1 format matching the private key used to sign |
hash | Hash to use for the message digest; default is SHA256 |
salt_length | Length of the salt in bytes; default is 8; minimum 1, maximum 32. The value must match the length used in RSA_SIGN_HASH |
RSA_VERIFY_HASH performs PSS encoding of the message_digest to be verified, and verifies the digital signature using the provided RSA public key.
By default, PSS encoding is used.
The PKCS_1_5 option will switch to the less secure PKCS 1.5 padding.
The PKCS_1_5 option exists only for backward compatibility; for security reasons, it should not be used in new projects.
This function expects the hash of a message (or message digest), not the actual message. The hash argument should specify the algorithm that was used to generate that hash. A function that accepts the actual message to hash might be introduced in a future version of Firebird. |
RSA_VERIFY_HASH Examples
Run the examples of RSA_PRIVATE(), RSA_PUBLIC() and RSA_SIGN_HASH() first.
select rsa_verify_hash(
crypt_hash('Test message' using sha256)
signature rdb$get_context('USER_SESSION', 'msg')
key rdb$get_context('USER_SESSION', 'public_key'))
from rdb$database
8.12. Other Functions
Functions that don’t fit in any other category.
8.12.1. MAKE_DBKEY()
Creates a DBKEY value
BINARY(8)
MAKE_DBKEY (relation, recnum [, dpnum [, ppnum]])
Parameter | Description |
---|---|
relation | Relation name or relation id |
recnum | Record number. Either absolute (if dpnum and ppnum are absent), or relative (if dpnum present) |
dpnum | Data page number. Either absolute (if ppnum is absent) or relative (if ppnum present) |
ppnum | Pointer page number |
MAKE_DBKEY creates a DBKEY value using a relation name or ID, record number, and (optionally) logical numbers of data page and pointer page.
Examples of MAKE_DBKEY

- Select a record using the relation name (note that the relation name is uppercase)

select * from rdb$relations
 where rdb$db_key = make_dbkey('RDB$RELATIONS', 0)

- Select a record using the relation ID

select * from rdb$relations
 where rdb$db_key = make_dbkey(6, 0)

- Select all records physically residing on the first data page

select * from rdb$relations
 where rdb$db_key >= make_dbkey(6, 0, 0)
   and rdb$db_key < make_dbkey(6, 0, 1)

- Select all records physically residing on the first data page of the 6th pointer page

select * from SOMETABLE
 where rdb$db_key >= make_dbkey('SOMETABLE', 0, 0, 5)
   and rdb$db_key < make_dbkey('SOMETABLE', 0, 1, 5)
8.12.2. RDB$ERROR()
Returns PSQL error information inside a WHEN … DO block
PSQL
Varies (see table below)
RDB$ERROR (<context>) <context> ::= GDSCODE | SQLCODE | SQLSTATE | EXCEPTION | MESSAGE
Context | Result type | Description |
---|---|---|
GDSCODE | INTEGER | Firebird error code, see also GDSCODE |
SQLCODE | INTEGER | (deprecated) SQL code, see also SQLCODE |
SQLSTATE | CHAR(5) | SQLstate, see also SQLSTATE |
EXCEPTION | VARCHAR(63) | Name of the active user-defined exception, or NULL for system exceptions |
MESSAGE | VARCHAR(1024) | Message text of the active exception |
RDB$ERROR returns data of the specified context about the active PSQL exception.
Its scope is confined to exception-handling blocks in PSQL (WHEN … DO).
Outside the exception handling blocks, RDB$ERROR always returns NULL.
This function cannot be called from DSQL.
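A PSQL sketch; the division by zero is just a convenient way to force a runtime error:

```sql
execute block returns (err_msg varchar(1024))
as
  declare zero integer = 0;
  declare dummy integer;
begin
  begin
    dummy = 1 / zero;  -- raises an arithmetic exception at runtime
  when any do
    err_msg = rdb$error(message);  -- capture the message text
  end
  suspend;
end
```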
8.12.3. RDB$GET_TRANSACTION_CN()
Returns the commit number (“CN”) of a transaction
BIGINT
RDB$GET_TRANSACTION_CN (transaction_id)
Parameter | Description |
---|---|
transaction_id | Transaction id |
If the return value is greater than 1, it is the actual CN of the transaction if it was committed after the database was started.
The function can also return one of the following results, indicating the commit status of the transaction:
Result | Meaning |
---|---|
-2 | Transaction is dead (rolled back) |
-1 | Transaction is in limbo |
0 | Transaction is still active |
1 | Transaction committed before the database started or less than the Oldest Interesting Transaction for the database |
NULL | Transaction number supplied is NULL or greater than Next Transaction for the database |
For more information about CN, consult the Firebird 4.0 Release Notes.
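A minimal sketch using the current transaction:

```sql
select rdb$get_transaction_cn(current_transaction) from rdb$database;
-- returns 0 while the current transaction is still active
```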
8.12.4. RDB$ROLE_IN_USE()
Checks if a role is active for the current connection
BOOLEAN
RDB$ROLE_IN_USE (role_name)
Parameter | Description |
---|---|
role_name | String expression for the role to check. Case-sensitive, must match the role name as stored in RDB$ROLES |
RDB$ROLE_IN_USE returns TRUE if the specified role is active for the current connection, and FALSE otherwise.
Contrary to CURRENT_ROLE
— which only returns the explicitly specified role — this function can be used to check for roles that are active by default, or cumulative roles activated by an explicitly specified role.
8.12.5. RDB$SYSTEM_PRIVILEGE()
Checks if the authorization of the current connection has a system privilege
BOOLEAN
RDB$SYSTEM_PRIVILEGE (<sys_privilege>)
<sys_privilege> ::= !! See CREATE ROLE !!
Parameter | Description |
---|---|
sys_privilege | System privilege |
RDB$SYSTEM_PRIVILEGE accepts a system privilege name and returns TRUE if the current connection has the given system privilege, and FALSE otherwise.
The authorization of the current connection is determined by privileges of the current user, the user PUBLIC, and the currently active roles (explicitly set or activated by default).
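A minimal sketch, using the USER_MANAGEMENT system privilege as an example:

```sql
select rdb$system_privilege(user_management) from rdb$database;
-- TRUE if the current attachment may manage user accounts
```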
9. Aggregate Functions
Aggregate functions operate on groups of records, rather than on individual records or variables.
They are often used in combination with a GROUP BY clause.
<aggregate_function> ::= aggregate_function ([<expr> [, <expr> ...]]) [FILTER (WHERE <condition>)]
The aggregate functions can also be used as window functions with the OVER () clause.
See Window (Analytical) Functions for more information.
Aggregate functions are available in DSQL and PSQL. Availability in ESQL is not tracked by this Language Reference.
9.1. FILTER Clause for Aggregate Functions
The FILTER clause extends aggregate functions (SUM, AVG, COUNT, etc.) with an additional WHERE clause.
This limits the rows processed by the aggregate functions to the rows that satisfy the conditions of both the main WHERE clause and those inside the FILTER clause.
It can be thought of as a more explicit form of using an aggregate function with a condition (DECODE, CASE, IIF, NULLIF) to ignore some values that would otherwise be considered by the aggregation.
The FILTER clause can be used with any aggregate function in aggregate or windowed (OVER) statements, but not with window-only functions like DENSE_RANK.
Example of FILTER
Suppose you need a query to count the rows with status = 'A' and the rows with status = 'E' as different columns.
The old way to do it would be:
select count(decode(status, 'A', 1)) status_a,
count(decode(status, 'E', 1)) status_e
from data;
The FILTER clause lets you express those conditions more explicitly:
select count(*) filter (where status = 'A') status_a,
count(*) filter (where status = 'E') status_e
from data;
You can use more than one FILTER clause in the same query.
9.2. General-purpose Aggregate Functions
9.2.1. AVG()
Average
Depends on the input type
AVG ([ALL | DISTINCT] <expr>)
Parameter | Description |
---|---|
expr | Expression. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF that returns a numeric data type. Aggregate functions are not allowed as expressions |
AVG returns the average argument value in the group.
NULL is ignored.
- Parameter ALL (the default) applies the aggregate function to all values.
- Parameter DISTINCT directs the AVG function to consider only one instance of each unique value, no matter how many times this value occurs.
- If the set of retrieved records is empty or contains only NULL, the result will be NULL.
The result type of AVG depends on the input type.
9.2.2. COUNT()
Counts non-NULL values
BIGINT
COUNT ([ALL | DISTINCT] <expr> | *)
Parameter | Description |
---|---|
expr | Expression. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF that returns a numeric data type. Aggregate functions are not allowed as expressions |
COUNT returns the number of non-null values in a group.
- ALL is the default: it counts all values in the set that are not NULL.
- If DISTINCT is specified, duplicates are excluded from the counted set.
- If COUNT (*) is specified instead of the expression expr, all rows will be counted. COUNT (*):
  - does not accept parameters
  - cannot be used with the keyword DISTINCT
  - does not take an expr argument, since its context is column-unspecific by definition
  - counts each row separately and returns the number of rows in the specified table or group without omitting duplicate rows
  - counts rows containing NULL
- If the result set is empty or contains only NULL in the specified column(s), the returned count is zero.
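A short sketch; the employee table (with a dept_no column) is from Firebird's example database:

```sql
-- Total number of rows vs. number of distinct departments
select count(*) as n_rows,
       count(distinct dept_no) as n_depts
  from employee;
```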
9.2.3. LIST()
Concatenates values into a string list
BLOB
LIST ([ALL | DISTINCT] <expr> [, separator ])
Parameter | Description |
---|---|
expr | Expression. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF that returns a string data type or a BLOB |
separator | Optional alternative separator, a string expression. Comma is the default separator |
LIST returns a string consisting of the non-NULL argument values in the group, separated either by a comma or by a user-supplied separator.
If there are no non-NULL values (this includes the case where the group is empty), NULL is returned.
- ALL (the default) results in all non-NULL values being listed. With DISTINCT, duplicates are removed, except if expr is a BLOB.
- The optional separator argument may be any string expression. This makes it possible to specify e.g. ascii_char(13) as a separator.
- The expr and separator arguments support BLOBs of any size and character set.
- Datetime and numeric arguments are implicitly converted to strings before concatenation.
- The result is a text BLOB, except when expr is a BLOB of another subtype.
- The ordering of the list values is undefined — the order in which the strings are concatenated is determined by the read order from the source set which, in tables, is not generally defined. If ordering is important, the source data can be pre-sorted using a derived table or similar.
This is a trick/workaround, and it depends on implementation details of the optimizer/execution order. This trick doesn’t always work, and it is not guaranteed to work across versions.
Some reports indicate this no longer works in Firebird 5.0, or only in more limited circumstances than in previous versions.
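A short sketch, again using the employee table from Firebird's example database:

```sql
-- One comma-separated string of last names per department
select dept_no, list(last_name) as names
  from employee
 group by dept_no;
```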
9.2.4. MAX()
Maximum
Returns a result of the same data type as the input expression.
MAX ([ALL | DISTINCT] <expr>)
Parameter | Description |
---|---|
expr | Expression. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions. |
MAX returns the maximum non-NULL element in the result set.

- If the group is empty or contains only NULLs, the result is NULL.
- If the input argument is a string, the function will return the value that will be sorted last if COLLATE is used.
- This function fully supports text BLOBs of any size and character set.
9.2.5. MIN()
Minimum
Returns a result of the same data type as the input expression.
MIN ([ALL | DISTINCT] <expr>)
Parameter | Description |
---|---|
expr | Expression. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions. |
MIN returns the minimum non-NULL element in the result set.

- If the group is empty or contains only NULLs, the result is NULL.
- If the input argument is a string, the function will return the value that will be sorted first if COLLATE is used.
- This function fully supports text BLOBs of any size and character set.
9.2.6. SUM()
Sum
Depends on the input type
SUM ([ALL | DISTINCT] <expr>)
Parameter | Description |
---|---|
expr | Numeric expression. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions. |
SUM calculates and returns the sum of non-NULL values in the group.

- If the group is empty or contains only NULLs, the result is NULL.
- ALL is the default option — all values in the set that are not NULL are processed. If DISTINCT is specified, duplicates are removed from the set and the SUM evaluation is done afterward.
The result type of SUM depends on the input type.
9.3. Statistical Aggregate Functions
9.3.1. CORR()
Correlation coefficient
DOUBLE PRECISION
CORR ( <expr1>, <expr2> )
Parameter | Description |
---|---|
expr1, expr2 | Numeric expression. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions. |
The CORR function returns the correlation coefficient for a pair of numerical expressions.
The function CORR(<expr1>, <expr2>) is equivalent to
COVAR_POP(<expr1>, <expr2>) / (STDDEV_POP(<expr2>) * STDDEV_POP(<expr1>))
This is also known as the Pearson correlation coefficient.
In a statistical sense, correlation is the degree to which a pair of variables are linearly related. A linear relation between variables means that the value of one variable can to a certain extent predict the value of the other. The correlation coefficient represents the degree of correlation as a number ranging from -1 (high inverse correlation) to 1 (high correlation). A value of 0 corresponds to no correlation.
If the group or window is empty, or contains only NULL values, the result will be NULL.
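A short sketch; the table and column names are illustrative only:

```sql
-- Degree of linear relationship between two measurements
select corr(height_cm, weight_kg) from people;
-- a value near 1 suggests taller people tend to weigh more
```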
9.3.2. COVAR_POP()
Population covariance
DOUBLE PRECISION
COVAR_POP ( <expr1>, <expr2> )
Parameter | Description |
---|---|
expr1, expr2 | Numeric expression. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions. |
The function COVAR_POP returns the population covariance for a pair of numerical expressions.
The function COVAR_POP(<expr1>, <expr2>) is equivalent to
(SUM(<expr1> * <expr2>) - SUM(<expr1>) * SUM(<expr2>) / COUNT(*)) / COUNT(*)
If the group or window is empty, or contains only NULL values, the result will be NULL.
9.3.3. COVAR_SAMP()
Sample covariance
DOUBLE PRECISION
COVAR_SAMP ( <expr1>, <expr2> )
Parameter | Description |
---|---|
expr1, expr2 | Numeric expression. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions. |
The function COVAR_SAMP returns the sample covariance for a pair of numerical expressions.
The function COVAR_SAMP(<expr1>, <expr2>) is equivalent to
(SUM(<expr1> * <expr2>) - SUM(<expr1>) * SUM(<expr2>) / COUNT(*)) / (COUNT(*) - 1)
If the group or window is empty, contains only 1 row, or contains only NULL
values, the result will be NULL
.
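A sketch contrasting the two covariance functions, again over the hypothetical SALES_STATS table; COVAR_SAMP divides by COUNT(*) - 1 instead of COUNT(*), so its magnitude is slightly larger for the same data:

select
  covar_pop(revenue, advertising) as cov_pop,
  covar_samp(revenue, advertising) as cov_samp
from sales_stats;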
9.3.4. STDDEV_POP()
Population standard deviation
DOUBLE PRECISION
or NUMERIC
depending on the type of expr
STDDEV_POP ( <expr> )
Parameter | Description
---|---
expr | Numeric expression. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
The function STDDEV_POP
returns the population standard deviation for a group or window.
NULL
values are skipped.
The function STDDEV_POP(<expr>)
is equivalent to
SQRT(VAR_POP(<expr>))
If the group or window is empty, or contains only NULL
values, the result will be NULL
.
9.3.5. STDDEV_SAMP()
Sample standard deviation
DOUBLE PRECISION
or NUMERIC
depending on the type of expr
STDDEV_SAMP ( <expr> )
Parameter | Description
---|---
expr | Numeric expression. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
The function STDDEV_SAMP
returns the sample standard deviation for a group or window.
NULL
values are skipped.
The function STDDEV_SAMP(<expr>)
is equivalent to
SQRT(VAR_SAMP(<expr>))
If the group or window is empty, contains only 1 row, or contains only NULL
values, the result will be NULL
.
9.3.6. VAR_POP()
Population variance
DOUBLE PRECISION
or NUMERIC
depending on the type of expr
VAR_POP ( <expr> )
Parameter | Description
---|---
expr | Numeric expression. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
The function VAR_POP
returns the population variance for a group or window.
NULL
values are skipped.
The function VAR_POP(<expr>)
is equivalent to
(SUM(<expr> * <expr>) - SUM(<expr>) * SUM(<expr>) / COUNT(<expr>)) / COUNT(<expr>)
If the group or window is empty, or contains only NULL
values, the result will be NULL
.
9.3.7. VAR_SAMP()
Sample variance
DOUBLE PRECISION
or NUMERIC
depending on the type of expr
VAR_SAMP ( <expr> )
Parameter | Description
---|---
expr | Numeric expression. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
The function VAR_SAMP
returns the sample variance for a group or window.
NULL
values are skipped.
The function VAR_SAMP(<expr>)
is equivalent to
(SUM(<expr> * <expr>) - SUM(<expr>) * SUM(<expr>) / COUNT(<expr>)) / (COUNT(<expr>) - 1)
If the group or window is empty, contains only 1 row, or contains only NULL
values, the result will be NULL
.
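The four dispersion functions are related: each standard deviation is the square root of the corresponding variance. A sketch over the hypothetical SALES_STATS table; sd_pop_check should equal sd_pop:

select
  var_pop(revenue) as v_pop,
  var_samp(revenue) as v_samp,
  stddev_pop(revenue) as sd_pop,
  sqrt(var_pop(revenue)) as sd_pop_check
from sales_stats;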
9.4. Linear Regression Aggregate Functions
Linear regression functions are useful for continuing a trend line. The trend or regression line is the straight line that best fits a set of values, so linear regression can be used to predict future values. To continue the regression line, you need to know its slope and its point of intersection with the y-axis. The set of linear regression functions can be used to calculate these values.
In the function syntax, y is interpreted as an x-dependent variable.
The linear regression aggregate functions take a pair of arguments, the dependent variable expression (y) and the independent variable expression (x), which are both numeric value expressions.
Any row in which either argument evaluates to NULL
is removed from the rows that qualify.
If there are no rows that qualify, then the result of REGR_COUNT
is 0
(zero), and the other linear regression aggregate functions result in NULL
.
9.4.1. REGR_AVGX()
Average of the independent variable of the regression line
DOUBLE PRECISION
REGR_AVGX ( <y>, <x> )
Parameter | Description
---|---
y | Dependent variable of the regression line. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
x | Independent variable of the regression line. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
The function REGR_AVGX
calculates the average of the independent variable (x) of the regression line.
The function REGR_AVGX(<y>, <x>)
is equivalent to
SUM(<exprX>) / REGR_COUNT(<y>, <x>)

<exprX> :== CASE WHEN <x> IS NOT NULL AND <y> IS NOT NULL THEN <x> END
9.4.2. REGR_AVGY()
Average of the dependent variable of the regression line
DOUBLE PRECISION
REGR_AVGY ( <y>, <x> )
Parameter | Description
---|---
y | Dependent variable of the regression line. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
x | Independent variable of the regression line. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
The function REGR_AVGY
calculates the average of the dependent variable (y) of the regression line.
The function REGR_AVGY(<y>, <x>)
is equivalent to
SUM(<exprY>) / REGR_COUNT(<y>, <x>)

<exprY> :== CASE WHEN <x> IS NOT NULL AND <y> IS NOT NULL THEN <y> END
9.4.3. REGR_COUNT()
Number of non-empty pairs of the regression line
DOUBLE PRECISION
REGR_COUNT ( <y>, <x> )
Parameter | Description
---|---
y | Dependent variable of the regression line. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
x | Independent variable of the regression line. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
The function REGR_COUNT
counts the number of non-empty pairs of the regression line.
The function REGR_COUNT(<y>, <x>)
is equivalent to
COUNT(*) FILTER (WHERE <x> IS NOT NULL AND <y> IS NOT NULL)
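Using the same SALES table as the forecasting example under REGR_INTERCEPT, a sketch that counts the usable (non-NULL) pairs and averages both regression variables:

select
  regr_count(total_value, extract(year from order_date)) as pair_count,
  regr_avgx(total_value, extract(year from order_date)) as avg_year,
  regr_avgy(total_value, extract(year from order_date)) as avg_value
from sales;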
9.4.4. REGR_INTERCEPT()
Point of intersection of the regression line with the y-axis
DOUBLE PRECISION
REGR_INTERCEPT ( <y>, <x> )
Parameter | Description
---|---
y | Dependent variable of the regression line. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
x | Independent variable of the regression line. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
The function REGR_INTERCEPT
calculates the point of intersection of the regression line with the y-axis.
The function REGR_INTERCEPT(<y>, <x>)
is equivalent to
REGR_AVGY(<y>, <x>) - REGR_SLOPE(<y>, <x>) * REGR_AVGX(<y>, <x>)
REGR_INTERCEPT
Examples
Forecasting sales volume
with recursive years (byyear) as (
select 1991
from rdb$database
union all
select byyear + 1
from years
where byyear < 2020
),
s as (
select
extract(year from order_date) as byyear,
sum(total_value) as total_value
from sales
group by 1
),
regr as (
select
regr_intercept(total_value, byyear) as intercept,
regr_slope(total_value, byyear) as slope
from s
)
select
years.byyear as byyear,
intercept + (slope * years.byyear) as total_value
from years
cross join regr
BYYEAR TOTAL_VALUE
------ ------------
  1991    118377.35
  1992    414557.62
  1993    710737.89
  1994   1006918.16
  1995   1303098.43
  1996   1599278.69
  1997   1895458.96
  1998   2191639.23
  1999   2487819.50
  2000   2783999.77
...
9.4.5. REGR_R2()
Coefficient of determination of the regression line
DOUBLE PRECISION
REGR_R2 ( <y>, <x> )
Parameter | Description
---|---
y | Dependent variable of the regression line. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
x | Independent variable of the regression line. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
The REGR_R2 function calculates the coefficient of determination, or R-squared, of the regression line.
The function REGR_R2(<y>, <x>)
is equivalent to
POWER(CORR(<y>, <x>), 2)
9.4.6. REGR_SLOPE()
Slope of the regression line
DOUBLE PRECISION
REGR_SLOPE ( <y>, <x> )
Parameter | Description
---|---
y | Dependent variable of the regression line. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
x | Independent variable of the regression line. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
The function REGR_SLOPE
calculates the slope of the regression line.
The function REGR_SLOPE(<y>, <x>)
is equivalent to
COVAR_POP(<y>, <x>) / VAR_POP(<exprX>)

<exprX> :== CASE WHEN <x> IS NOT NULL AND <y> IS NOT NULL THEN <x> END
9.4.7. REGR_SXX()
Sum of squares of the independent variable
DOUBLE PRECISION
REGR_SXX ( <y>, <x> )
Parameter | Description
---|---
y | Dependent variable of the regression line. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
x | Independent variable of the regression line. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
The function REGR_SXX
calculates the sum of squares of the independent expression variable (x).
The function REGR_SXX(<y>, <x>)
is equivalent to
REGR_COUNT(<y>, <x>) * VAR_POP(<exprX>)

<exprX> :== CASE WHEN <x> IS NOT NULL AND <y> IS NOT NULL THEN <x> END
9.4.8. REGR_SXY()
Sum of products of the independent variable and the dependent variable
DOUBLE PRECISION
REGR_SXY ( <y>, <x> )
Parameter | Description
---|---
y | Dependent variable of the regression line. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
x | Independent variable of the regression line. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
The function REGR_SXY
calculates the sum of products of independent variable expression (x) times dependent variable expression (y).
The function REGR_SXY(<y>, <x>)
is equivalent to
REGR_COUNT(<y>, <x>) * COVAR_POP(<y>, <x>)
9.4.9. REGR_SYY()
Sum of squares of the dependent variable
DOUBLE PRECISION
REGR_SYY ( <y>, <x> )
Parameter | Description
---|---
y | Dependent variable of the regression line. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
x | Independent variable of the regression line. It may contain a table column, a constant, a variable, an expression, a non-aggregate function or a UDF. Aggregate functions are not allowed as expressions.
The function REGR_SYY
calculates the sum of squares of the dependent variable (y).
The function REGR_SYY(<y>, <x>)
is equivalent to
REGR_COUNT(<y>, <x>) * VAR_POP(<exprY>)

<exprY> :== CASE WHEN <x> IS NOT NULL AND <y> IS NOT NULL THEN <y> END
10. Window (Analytical) Functions
Window functions (also known as analytical functions) are a kind of aggregation, but one that does not “reduce” a group into a single row. The columns of aggregated data are mixed with the query result set.
The window functions are used with the OVER
clause.
They may appear only in the SELECT
list, or the ORDER BY
clause of a query.
Firebird window functions may be partitioned and ordered.
Window functions are available in DSQL and PSQL. Availability in ESQL is not tracked by this Language Reference.
<window_function> ::=
    <aggregate-function> OVER <window-name-or-spec>
  | <window-function-name> ([<value-expression> [, <value-expression> ...]])
      OVER <window-name-or-spec>

<aggregate-function> ::=
  !! See Aggregate Functions !!

<window-name-or-spec> ::=
  (<window-specification-details>) | existing_window_name

<window-function-name> ::=
  <ranking-function> | <navigational-function>

<ranking-function> ::=
  RANK | DENSE_RANK | PERCENT_RANK | ROW_NUMBER | CUME_DIST | NTILE

<navigational-function> ::=
  LEAD | LAG | FIRST_VALUE | LAST_VALUE | NTH_VALUE

<window-specification-details> ::=
  [existing-window-name]
  [<window-partition-clause>]
  [<order-by-clause>]
  [<window-frame-clause>]

<window-partition-clause> ::=
  PARTITION BY <value-expression> [, <value-expression> ...]

<order-by-clause> ::=
  ORDER BY <sort-specification> [, <sort-specification> ...]

<sort-specification> ::=
  <value-expression> [<ordering-specification>] [<null-ordering>]

<ordering-specification> ::=
  ASC | ASCENDING | DESC | DESCENDING

<null-ordering> ::=
  NULLS FIRST | NULLS LAST

<window-frame-clause> ::=
  { RANGE | ROWS } <window-frame-extent>

<window-frame-extent> ::=
  <window-frame-start> | <window-frame-between>

<window-frame-start> ::=
  UNBOUNDED PRECEDING | <value-expression> PRECEDING | CURRENT ROW

<window-frame-between> ::=
  BETWEEN { UNBOUNDED PRECEDING | <value-expression> PRECEDING
            | CURRENT ROW | <value-expression> FOLLOWING }
  AND { <value-expression> PRECEDING | CURRENT ROW
        | <value-expression> FOLLOWING | UNBOUNDED FOLLOWING }
Argument | Description
---|---
value-expression | Expression. May contain a table column, constant, variable, expression, scalar or aggregate function. Window functions are not allowed as an expression.
aggregate-function | An aggregate function used as a window function
existing-window-name | A named window defined using the WINDOW clause
10.1. Aggregate Functions as Window Functions
All aggregate functions — including those with a FILTER clause — can be used as window functions, by adding the OVER clause.
Imagine a table EMPLOYEE
with columns ID
, NAME
and SALARY
, and the need to show each employee with their respective salary and the percentage of their salary over the payroll.
A normal query could achieve this, as follows:
select
id,
department,
salary,
salary / (select sum(salary) from employee) portion
from employee
order by id;
id department salary portion
-- ---------- ------ ----------
1 R & D 10.00 0.2040
2 SALES 12.00 0.2448
3 SALES 8.00 0.1632
4 R & D 9.00 0.1836
5 R & D 10.00 0.2040
The query is repetitive and lengthy to run, especially if EMPLOYEE
happens to be a complex view.
The same query could be specified in a much faster and more elegant way using a window function:
select
id,
department,
salary,
salary / sum(salary) OVER () portion
from employee
order by id;
Here, sum(salary) over ()
is computed with the sum of all SALARY
from the query (the EMPLOYEE
table).
10.2. Partitioning
Like aggregate functions, which may operate alone or in relation to a group, window functions may also operate on a group, which is called a “partition”.
<window function>(...) OVER (PARTITION BY <expr> [, <expr> ...])
Aggregation over a group could produce more than one row, so the result set generated by a partition is joined with the main query using the same expression list as the partition.
Continuing the EMPLOYEE
example, instead of getting the portion of each employee’s salary over the all-employees total, we would like to get the portion based on the employees in the same department:
select
id,
department,
salary,
salary / sum(salary) OVER (PARTITION BY department) portion
from employee
order by id;
id department salary portion
-- ---------- ------ ----------
1 R & D 10.00 0.3448
2 SALES 12.00 0.6000
3 SALES 8.00 0.4000
4 R & D 9.00 0.3103
5 R & D 10.00 0.3448
10.3. Ordering
The ORDER BY
sub-clause can be used with or without partitions.
The ORDER BY
clause within OVER
specifies the order in which the window function will process rows.
This order does not have to be the same as the order rows appear in the output.
There is an important concept associated with window functions: for each row there is a set of rows in its partition called the window frame.
By default, when specifying ORDER BY
, the frame consists of all rows from the beginning of the partition to the current row and rows equal to the current ORDER BY
expression.
Without ORDER BY
, the default frame consists of all rows in the partition.
As a result, for standard aggregate functions, the ORDER BY
clause produces partial aggregation results as rows are processed.
select
id,
salary,
sum(salary) over (order by salary) cumul_salary
from employee
order by salary;
id salary cumul_salary
-- ------ ------------
3 8.00 8.00
4 9.00 17.00
1 10.00 37.00
5 10.00 37.00
2 12.00 49.00
Then cumul_salary
returns the partial/accumulated (or running) aggregation (of the SUM
function).
It may appear strange that 37.00 is repeated for the ids 1 and 5, but that is how it should work.
The ORDER BY
keys are grouped together, and the aggregation is computed once (but summing the two 10.00).
To avoid this, you can add the ID
field to the end of the ORDER BY
clause.
It’s possible to use multiple windows with different orders, and ORDER BY
parts like ASC
/DESC
and NULLS FIRST/LAST
.
With a partition, ORDER BY
works the same way, but at each partition boundary the aggregation is reset.
All aggregation functions can use ORDER BY
, except for LIST()
.
10.4. Window Frames
A window frame specifies which rows to consider for the current row when evaluating the window function.
The frame comprises three pieces: unit, start bound, and end bound.
The unit can be RANGE
or ROWS
, which defines how the bounds will work.
The bounds are:
-
UNBOUNDED PRECEDING
-
<expr> PRECEDING
-
CURRENT ROW
-
<expr> FOLLOWING
-
UNBOUNDED FOLLOWING
-
With RANGE, the ORDER BY should specify exactly one expression, and that expression should be of a numeric, date, time, or timestamp type. For <expr> PRECEDING, expr is subtracted from the ORDER BY expression, and for <expr> FOLLOWING, expr is added. For CURRENT ROW, the expression is used as-is.
All rows inside the current partition that are between the bounds are considered part of the resulting window frame.
-
With ROWS, ORDER BY expressions are not limited by number or type. For this unit, <expr> PRECEDING and <expr> FOLLOWING relate to the row position within the current partition, and not the values of the ordering keys.
Both UNBOUNDED PRECEDING and UNBOUNDED FOLLOWING work identically with RANGE and ROWS.
UNBOUNDED PRECEDING starts at the first row of the current partition, and UNBOUNDED FOLLOWING ends at the last row of the current partition.
The frame syntax with <window-frame-start>
specifies the start-frame, with the end-frame being CURRENT ROW
.
Some window functions discard frames:
-
ROW_NUMBER, LAG and LEAD always work as ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
-
DENSE_RANK, RANK, PERCENT_RANK and CUME_DIST always work as RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
-
FIRST_VALUE, LAST_VALUE and NTH_VALUE respect frames, but the RANGE unit behaviour is identical to ROWS.
Example Using Frame
When the ORDER BY
clause is used, but a frame clause is omitted, the default considers the partition up to the current row.
When combined with SUM
, this results in a running total:
select
id,
salary,
sum(salary) over (order by salary) sum_salary
from employee
order by salary;
Result:
| id | salary | sum_salary |
|---:|-------:|-----------:|
| 3 | 8.00 | 8.00 |
| 4 | 9.00 | 17.00 |
| 1 | 10.00 | 37.00 |
| 5 | 10.00 | 37.00 |
| 2 | 12.00 | 49.00 |
On the other hand, if we apply a frame for the entire partition, we get the total for the entire partition.
select
id,
salary,
sum(salary) over (
order by salary
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
) sum_salary
from employee
order by salary;
Result:
| id | salary | sum_salary |
|---:|-------:|-----------:|
| 3 | 8.00 | 49.00 |
| 4 | 9.00 | 49.00 |
| 1 | 10.00 | 49.00 |
| 5 | 10.00 | 49.00 |
| 2 | 12.00 | 49.00 |
This example is to demonstrate how this works;
the result of this example would be simpler to produce with sum(salary) over()
.
We can use a range frame to compute the count of employees with salaries between (an employee’s salary - 1) and (their salary + 1) with this query:
select
id,
salary,
count(*) over (
order by salary
RANGE BETWEEN 1 PRECEDING AND 1 FOLLOWING
) range_count
from employee
order by salary;
Result:
| id | salary | range_count |
|---:|-------:|------------:|
| 3 | 8.00 | 2 |
| 4 | 9.00 | 4 |
| 1 | 10.00 | 3 |
| 5 | 10.00 | 3 |
| 2 | 12.00 | 1 |
10.5. Named Windows
The WINDOW
clause can be used to explicitly name a window, for example to avoid repetitive or confusing expressions.
A named window can be used
-
in the OVER clause to reference a window definition, e.g. OVER window_name
-
as a base window of another named or inline (OVER) window, if it is not a window with a frame (ROWS or RANGE clauses)
A window with a base window cannot have PARTITION BY, nor override the ordering (ORDER BY) of a base window.
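Continuing the EMPLOYEE example, a window named once in the WINDOW clause can be shared by several window functions instead of repeating the same OVER specification:

select
  id,
  department,
  salary,
  sum(salary) over w as dept_total,
  avg(salary) over w as dept_avg
from employee
window w as (partition by department)
order by id;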
10.6. Ranking Functions
The ranking functions compute the ordinal rank of a row within the window partition.
These functions can be used with or without partitioning and ordering. However, using them without ordering almost never makes sense.
The ranking functions can be used to create different types of counters.
Consider SUM(1) OVER (ORDER BY SALARY)
as an example of what they can do, each of them differently.
Following is an example query, also comparing with the SUM
behavior.
select
id,
salary,
dense_rank() over (order by salary),
rank() over (order by salary),
row_number() over (order by salary),
sum(1) over (order by salary)
from employee
order by salary;
id salary dense_rank rank row_number sum
-- ------ ---------- ---- ---------- ---
3 8.00 1 1 1 1
4 9.00 2 2 2 2
1 10.00 3 3 3 4
5 10.00 3 3 4 4
2 12.00 4 5 5 5
The difference between DENSE_RANK
and RANK
is that there is a gap related to duplicate rows (relative to the window ordering) only in RANK
.
DENSE_RANK
continues assigning sequential numbers after the duplicate salary.
On the other hand, ROW_NUMBER
always assigns sequential numbers, even when there are duplicate values.
10.6.1. CUME_DIST()
Relative rank (or, cumulative distribution) of a row within a window partition
DOUBLE PRECISION
CUME_DIST () OVER <window_name_or_spec>
CUME_DIST
is calculated as the number of rows preceding or peer of the current row divided by the number of rows in the partition.
In other words, CUME_DIST() OVER <window_name_or_spec>
is equivalent to COUNT(*) OVER <window_name_or_spec> / COUNT(*) OVER()
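For the five-row EMPLOYEE table used throughout this chapter (salaries 8.00, 9.00, 10.00, 10.00 and 12.00), the following query illustrates the definition; both employees earning 10.00 get 0.8, because four of the five rows are preceding or peers:

select
  id,
  salary,
  cume_dist() over (order by salary) as c_dist
from employee
order by salary;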
10.6.2. DENSE_RANK()
RANK()
, PERCENT_RANK()
Rank of rows in a partition without gaps
BIGINT
DENSE_RANK () OVER <window_name_or_spec>
Rows with the same window_order values get the same rank within the partition window_partition, if specified. The dense rank of a row is equal to the number of different rank values in the partition preceding the current row, plus one.
10.6.3. NTILE()
RANK()
, ROW_NUMBER()
Distributes the rows of the current window partition into the specified number of tiles (groups)
BIGINT
NTILE ( number_of_tiles ) OVER <window_name_or_spec>
Argument | Description
---|---
number_of_tiles | Number of tiles (groups). Restricted to a positive integer literal, a named parameter (PSQL), or a positional parameter (DSQL).
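NTILE distributes the rows of the partition, in ORDER BY order, over number_of_tiles groups of as equal a size as possible, with the larger groups coming first. Splitting the EMPLOYEE table into two salary bands is a simple illustration:

select
  id,
  salary,
  ntile(2) over (order by salary) as band
from employee
order by salary;

With five rows and two tiles, the three lowest salaries fall into tile 1 and the remaining two into tile 2.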
10.6.4. PERCENT_RANK()
Relative rank of a row within a window partition.
DOUBLE PRECISION
PERCENT_RANK () OVER <window_name_or_spec>
PERCENT_RANK
is calculated as the RANK()
minus 1 of the current row divided by the number of rows in the partition minus 1.
In other words, PERCENT_RANK() OVER <window_name_or_spec>
is equivalent to (RANK() OVER <window_name_or_spec> - 1) / CAST(COUNT(*) OVER() - 1 AS DOUBLE PRECISION)
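Applied to the EMPLOYEE table, PERCENT_RANK scales RANK() to the interval [0, 1]; with ranks 1, 2, 3, 3 and 5 over five rows, the results are 0, 0.25, 0.5, 0.5 and 1:

select
  id,
  salary,
  percent_rank() over (order by salary) as p_rank
from employee
order by salary;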
10.6.5. RANK()
RANK()
, CUME_DIST()
Rank of each row in a partition
BIGINT
RANK () OVER <window_name_or_spec>
Rows with the same values of window-order get the same rank within the partition window-partition, if specified. The rank of a row is equal to the number of rank values in the partition preceding the current row, plus one.
10.6.6. ROW_NUMBER()
Sequential row number in the partition
BIGINT
ROW_NUMBER () OVER <window_name_or_spec>
Returns the sequential row number in the partition, where 1
is the first row in each of the partitions.
10.7. Navigational Functions
The navigational functions get the simple (non-aggregated) value of an expression from another row of the query, within the same partition.
Example of Navigational Functions
select
id,
salary,
first_value(salary) over (order by salary),
last_value(salary) over (order by salary),
nth_value(salary, 2) over (order by salary),
lag(salary) over (order by salary),
lead(salary) over (order by salary)
from employee
order by salary;
id salary first_value last_value nth_value lag    lead
-- ------ ----------- ---------- --------- ------ ------
 3   8.00        8.00       8.00    <null> <null>   9.00
 4   9.00        8.00       9.00      9.00   8.00  10.00
 1  10.00        8.00      10.00      9.00   9.00  10.00
 5  10.00        8.00      10.00      9.00  10.00  12.00
 2  12.00        8.00      12.00      9.00  10.00 <null>
10.7.1. FIRST_VALUE()
First value of the current partition
The same type as expr
FIRST_VALUE ( <expr> ) OVER <window_name_or_spec>
Argument | Description
---|---
expr | Expression. May contain a table column, constant, variable, expression, scalar function. Aggregate functions are not allowed as an expression.
10.7.2. LAG()
Value from row in the current partition with a given offset before the current row
The same type as expr
LAG ( <expr> [, <offset> [, <default>]] ) OVER <window_name_or_spec>
Argument | Description
---|---
expr | Expression. May contain a table column, constant, variable, expression, scalar function. Aggregate functions are not allowed as an expression.
offset | The offset in rows before the current row to get the value identified by expr. If offset is not specified, the default is 1.
default | The default value to return if offset points outside the partition. Default is NULL.
The LAG
function provides access to the row in the current partition with a given offset before the current row.
If offset points outside the current partition, default will be returned, or NULL
if no default was specified.
LAG
Examples
Suppose you have RATE
table that stores the exchange rate for each day.
To trace the change of the exchange rate over the past five days you can use the following query.
select
bydate,
cost,
cost - lag(cost) over (order by bydate) as change,
100 * (cost - lag(cost) over (order by bydate)) /
lag(cost) over (order by bydate) as percent_change
from rate
where bydate between dateadd(-4 day to current_date)
and current_date
order by bydate
bydate     cost   change percent_change
---------- ------ ------ --------------
27.10.2014  31.00 <null>         <null>
28.10.2014  31.53   0.53         1.7096
29.10.2014  31.40  -0.13        -0.4123
30.10.2014  31.67   0.27         0.8598
31.10.2014  32.00   0.33         1.0419
10.7.3. LAST_VALUE()
Last value from the current partition
The same type as expr
LAST_VALUE ( <expr> ) OVER <window_name_or_spec>
Argument | Description
---|---
expr | Expression. May contain a table column, constant, variable, expression, scalar function. Aggregate functions are not allowed as an expression.
See also note on frame for navigational functions.
10.7.4. LEAD()
Value from a row in the current partition with a given offset after the current row
The same type as expr
LEAD ( <expr> [, <offset> [, <default>]] ) OVER <window_name_or_spec>
Argument | Description
---|---
expr | Expression. May contain a table column, constant, variable, expression, scalar function. Aggregate functions are not allowed as an expression.
offset | The offset in rows after the current row to get the value identified by expr. If offset is not specified, the default is 1.
default | The default value to return if offset points outside the partition. Default is NULL.
The LEAD
function provides access to the row in the current partition with a given offset after the current row.
If offset points outside the current partition, default will be returned, or NULL
if no default was specified.
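LEAD mirrors the LAG example above. With the same RATE table, each row can be shown next to the following day's cost; the third argument supplies a value for the last row, where no following row exists:

select
  bydate,
  cost,
  lead(cost) over (order by bydate) as next_cost,
  lead(cost, 1, 0) over (order by bydate) as next_cost_or_zero
from rate
order by bydate;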
10.7.5. NTH_VALUE()
The Nth value starting from the first or the last row of the current frame
The same type as expr
NTH_VALUE ( <expr>, <offset> ) [FROM {FIRST | LAST}] OVER <window_name_or_spec>
Argument | Description
---|---
expr | Expression. May contain a table column, constant, variable, expression, scalar function. Aggregate functions are not allowed as an expression.
offset | The offset in rows from the first (FROM FIRST) or the last (FROM LAST) row of the current frame to get the value identified by expr.
The NTH_VALUE
function returns the Nth value starting from the first (FROM FIRST
) or the last (FROM LAST
) row of the current frame, see also note on frame for navigational functions.
Offset 1
with FROM FIRST
is equivalent to FIRST_VALUE
, and offset 1
with FROM LAST
is equivalent to LAST_VALUE
.
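A sketch over the EMPLOYEE table showing both directions; note that with the default frame (up to the current row), FROM FIRST returns NULL until the frame holds at least two rows:

select
  id,
  salary,
  nth_value(salary, 2) over (order by salary) as second_from_first,
  nth_value(salary, 2) from last over (order by salary) as second_from_last
from employee
order by salary;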
10.8. Aggregate Functions Inside Window Specification
It is possible to use aggregate functions (but not window functions) inside the OVER
clause.
In that case, first the aggregate function is applied to determine the windows, and only then the window functions are applied on those windows.
When using aggregate functions inside |
select
code_employee_group,
avg(salary) as avg_salary,
rank() over (order by avg(salary)) as salary_rank
from employee
group by code_employee_group
11. System Packages
System packages provide utility stored procedures and stored functions.
RDB$BLOB_UTIL
-
Utilities for blob manipulation
RDB$PROFILER
-
Profiler
RDB$TIME_ZONE_UTIL
-
Time zone utilities
11.1. RDB$BLOB_UTIL
Package of functions and procedures for blob manipulation
11.1.1. Function IS_WRITABLE
RDB$BLOB_UTIL.IS_WRITABLE
returns TRUE
when a BLOB is suitable for data appending using BLOB_APPEND
without copying.
-
BLOB type BLOB NOT NULL
Return type: BOOLEAN NOT NULL
.
11.1.2. Function NEW_BLOB
RDB$BLOB_UTIL.NEW_BLOB
creates a new BLOB SUB_TYPE BINARY.
It returns a BLOB suitable for data appending, similar to BLOB_APPEND
.
The advantage over BLOB_APPEND
is that it’s possible to set custom SEGMENTED
and TEMP_STORAGE
options.
BLOB_APPEND
always creates BLOBs in temporary storage, which may not always be the best approach if the created BLOB is going to be stored in a permanent table, as this will require a copy operation.
The BLOB returned from this function, even when TEMP_STORAGE = FALSE
, may be used with BLOB_APPEND
for appending data.
-
SEGMENTED type BOOLEAN NOT NULL
-
TEMP_STORAGE type BOOLEAN NOT NULL
Return type: BLOB SUB_TYPE BINARY NOT NULL
.
11.1.3. Function OPEN_BLOB
RDB$BLOB_UTIL.OPEN_BLOB
opens an existing BLOB for reading.
It returns a handle (an integer bound to the transaction) suitable for use with other functions of this package, like SEEK
, READ_DATA
and CLOSE_HANDLE
.
Handles which are not explicitly closed are closed automatically when the transaction ends.
-
BLOB type BLOB NOT NULL
Return type: INTEGER NOT NULL
.
11.1.4. Function READ_DATA
RDB$BLOB_UTIL.READ_DATA
reads chunks of data of a BLOB handle opened with RDB$BLOB_UTIL.OPEN_BLOB
.
When the BLOB is fully read and there is no more data, it returns NULL
.
If LENGTH
is passed with a positive number, it returns a VARBINARY with its maximum length.
If LENGTH
is NULL
it returns a segment of the BLOB with a maximum length of 32765.
-
HANDLE type INTEGER NOT NULL
-
LENGTH type INTEGER
Return type: VARBINARY(32765)
.
11.1.5. Function SEEK
RDB$BLOB_UTIL.SEEK
sets the position for the next READ_DATA and returns the new position.
MODE
may be:
0 | from the start
1 | from current position
2 | from end
When MODE
is 2
, OFFSET
should be zero or negative.
-
HANDLE type INTEGER NOT NULL
-
MODE type INTEGER NOT NULL
-
OFFSET type INTEGER NOT NULL
Return type: INTEGER NOT NULL
.
SEEK
only works on stream blobs.
Attempting to seek on a segmented blob results in error “invalid BLOB type for operation”.
11.1.6. Procedure CANCEL_BLOB
RDB$BLOB_UTIL.CANCEL_BLOB
immediately releases a temporary BLOB, like one created with BLOB_APPEND
.
If the same BLOB is used after cancel, an “invalid blob id” error will be raised.
-
BLOB type BLOB
11.1.7. Procedure CLOSE_HANDLE
RDB$BLOB_UTIL.CLOSE_HANDLE
closes a BLOB handle opened with RDB$BLOB_UTIL.OPEN_BLOB
.
Handles which are not explicitly closed are closed automatically when the transaction ends.
Parameters:
- HANDLE: type INTEGER NOT NULL
11.1.8. Examples
EXECUTE BLOCK
execute block returns (b blob)
as
begin
-- Create a stream BLOB in temporary storage.
b = rdb$blob_util.new_blob(false, true);
-- Add chunks of data.
b = blob_append(b, '12345');
b = blob_append(b, '67');
suspend;
end
EXECUTE BLOCK
execute block returns (s varchar(10))
as
declare b blob = '1234567';
declare bhandle integer;
begin
-- Open the BLOB and get a BLOB handle.
bhandle = rdb$blob_util.open_blob(b);
-- Get chunks of data as string and return.
s = rdb$blob_util.read_data(bhandle, 3);
suspend;
s = rdb$blob_util.read_data(bhandle, 3);
suspend;
s = rdb$blob_util.read_data(bhandle, 3);
suspend;
-- Here EOF is found, so it returns NULL.
s = rdb$blob_util.read_data(bhandle, 3);
suspend;
-- Close the BLOB handle.
execute procedure rdb$blob_util.close_handle(bhandle);
end
set term !;
execute block returns (s varchar(10))
as
declare b blob;
declare bhandle integer;
begin
-- Create a stream BLOB handle.
b = rdb$blob_util.new_blob(false, true);
-- Add data.
b = blob_append(b, '0123456789');
-- Open the BLOB.
bhandle = rdb$blob_util.open_blob(b);
-- Seek to position 5 from the start.
rdb$blob_util.seek(bhandle, 0, 5);
s = rdb$blob_util.read_data(bhandle, 3);
suspend;
-- Seek to position 2 from the start.
rdb$blob_util.seek(bhandle, 0, 2);
s = rdb$blob_util.read_data(bhandle, 3);
suspend;
-- Advance 2 from the current position.
rdb$blob_util.seek(bhandle, 1, 2);
s = rdb$blob_util.read_data(bhandle, 3);
suspend;
-- Seek to offset -1 from the end.
rdb$blob_util.seek(bhandle, 2, -1);
s = rdb$blob_util.read_data(bhandle, 3);
suspend;
end!
set term ;!
create table t(b blob);
set term !;
execute block returns (bool boolean)
as
declare b blob;
begin
b = blob_append(null, 'writable');
bool = rdb$blob_util.is_writable(b);
suspend;
insert into t (b) values ('not writable') returning b into b;
bool = rdb$blob_util.is_writable(b);
suspend;
end!
set term ;!
11.2. RDB$PROFILER
A package with functions and procedures to run and control the profiler.
These profiler controls are standard, but the actual profiler is a plugin.
The profiler used depends on the setting of DefaultProfilerPlugin
in firebird.conf
or databases.conf
, or the PLUGIN_NAME
parameter of START_SESSION
.
Firebird 5.0 comes with a profiler plugin called Default_Profiler.
Users are allowed to profile their own connections.
Profiling connections from other users requires the PROFILE_ANY_ATTACHMENT
system privilege.
11.2.1. Function START_SESSION
RDB$PROFILER.START_SESSION
starts a new profiler session, makes it the current session (of the given ATTACHMENT_ID
) and returns its identifier.
If FLUSH_INTERVAL
is different from NULL
, auto-flush is set up in the same way as manually calling RDB$PROFILER.SET_FLUSH_INTERVAL
.
If PLUGIN_NAME
is NULL
(the default), it uses the database configuration DefaultProfilerPlugin
.
PLUGIN_OPTIONS
are plugin specific options and currently should be NULL
for the Default_Profiler
plugin.
Parameters:
- DESCRIPTION: type VARCHAR(255) CHARACTER SET UTF8, default NULL
- FLUSH_INTERVAL: type INTEGER, default NULL
- ATTACHMENT_ID: type BIGINT NOT NULL, default CURRENT_CONNECTION
- PLUGIN_NAME: type VARCHAR(255) CHARACTER SET UTF8, default NULL
- PLUGIN_OPTIONS: type VARCHAR(255) CHARACTER SET UTF8, default NULL

Return type: BIGINT NOT NULL.
11.2.2. Procedure CANCEL_SESSION
RDB$PROFILER.CANCEL_SESSION
cancels the current profiler session (of the given ATTACHMENT_ID
).
All session data present in the profiler plugin is discarded and will not be flushed.
Data already flushed is not deleted automatically.
Parameters:
- ATTACHMENT_ID: type BIGINT NOT NULL, default CURRENT_CONNECTION
11.2.3. Procedure DISCARD
RDB$PROFILER.DISCARD
removes all sessions (of the given ATTACHMENT_ID
) from memory, without flushing them.
If there is an active session, it is cancelled.
Parameters:
- ATTACHMENT_ID: type BIGINT NOT NULL, default CURRENT_CONNECTION
11.2.4. Procedure FINISH_SESSION
RDB$PROFILER.FINISH_SESSION
finishes the current profiler session (of the given ATTACHMENT_ID
).
If FLUSH
is TRUE
, the snapshot tables are updated with data of the finished session (and old finished sessions not yet present in the snapshot), otherwise data remains only in memory for later update.
Calling RDB$PROFILER.FINISH_SESSION(TRUE)
has the same semantics of calling RDB$PROFILER.FINISH_SESSION(FALSE)
followed by RDB$PROFILER.FLUSH
(using the same ATTACHMENT_ID
).
Parameters:
- FLUSH: type BOOLEAN NOT NULL, default TRUE
- ATTACHMENT_ID: type BIGINT NOT NULL, default CURRENT_CONNECTION
11.2.5. Procedure FLUSH
RDB$PROFILER.FLUSH
updates the snapshot tables with data from the profile sessions (of the given ATTACHMENT_ID
) in memory.
After flushing, the data is stored in tables PLG$PROF_SESSIONS
, PLG$PROF_STATEMENTS
, PLG$PROF_RECORD_SOURCES
, PLG$PROF_REQUESTS
, PLG$PROF_PSQL_STATS
and PLG$PROF_RECORD_SOURCE_STATS
and may be read and analyzed by the user.
Data is updated using an autonomous transaction, so if the procedure is called in a snapshot transaction, data will not be directly readable in the same transaction.
Once flush happens, finished sessions are removed from memory.
Parameters:
- ATTACHMENT_ID: type BIGINT NOT NULL, default CURRENT_CONNECTION
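As a sketch (not part of the original reference), the following flushes the current connection's profiler data and then reads a snapshot table in a new READ COMMITTED transaction, so the freshly flushed rows are visible:

execute procedure rdb$profiler.flush;

commit;
set transaction read committed;

select * from plg$prof_sessions;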
11.2.6. Procedure PAUSE_SESSION
RDB$PROFILER.PAUSE_SESSION
pauses the current profiler session (of the given ATTACHMENT_ID
), so statistics of subsequently executed statements are not collected.
If FLUSH
is TRUE
, the snapshot tables are updated with data up to the current moment, otherwise data remains only in memory for later update.
Calling RDB$PROFILER.PAUSE_SESSION(TRUE)
has the same semantics as calling RDB$PROFILER.PAUSE_SESSION(FALSE)
followed by RDB$PROFILER.FLUSH
(using the same ATTACHMENT_ID
).
Parameters:
- FLUSH: type BOOLEAN NOT NULL, default FALSE
- ATTACHMENT_ID: type BIGINT NOT NULL, default CURRENT_CONNECTION
11.2.7. Procedure RESUME_SESSION
RDB$PROFILER.RESUME_SESSION
resumes the current profiler session (of the given ATTACHMENT_ID
), if it was paused, so statistics of subsequently executed statements are collected again.
Parameters:
- ATTACHMENT_ID: type BIGINT NOT NULL, default CURRENT_CONNECTION
11.2.8. Procedure SET_FLUSH_INTERVAL
RDB$PROFILER.SET_FLUSH_INTERVAL
turns periodic auto-flush on (when FLUSH_INTERVAL
is greater than 0) or off (when FLUSH_INTERVAL
is equal to 0).
FLUSH_INTERVAL
is interpreted as number of seconds.
Parameters:
- FLUSH_INTERVAL: type INTEGER NOT NULL
- ATTACHMENT_ID: type BIGINT NOT NULL, default CURRENT_CONNECTION
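For example, to enable auto-flush every 5 seconds for the current connection, and later turn it off again (a sketch, assuming an active profiler session):

execute procedure rdb$profiler.set_flush_interval(5);

-- ...profiled work runs here...

-- 0 turns periodic auto-flush off again.
execute procedure rdb$profiler.set_flush_interval(0);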
11.2.9. Example
Below is a sample profile session and queries for data analysis.
-
Preparation — create table and routines that will be analyzed
create table tab (
    id integer not null,
    val integer not null
);

set term !;

create or alter function mult(p1 integer, p2 integer) returns integer
as
begin
    return p1 * p2;
end!

create or alter procedure ins
as
    declare n integer = 1;
begin
    while (n <= 1000)
    do
    begin
        if (mod(n, 2) = 1) then
            insert into tab values (:n, mult(:n, 2));
        n = n + 1;
    end
end!

set term ;!
-
Start profiling
select rdb$profiler.start_session('Profile Session 1')
  from rdb$database;

set term !;

execute block
as
begin
    execute procedure ins;
    delete from tab;
end!

set term ;!

execute procedure rdb$profiler.finish_session(true);

execute procedure ins;

select rdb$profiler.start_session('Profile Session 2')
  from rdb$database;

select mod(id, 5),
       sum(val)
  from tab
  where id <= 50
  group by mod(id, 5)
  order by sum(val);

execute procedure rdb$profiler.finish_session(true);
-
Data analysis
set transaction read committed;

select * from plg$prof_sessions;

select * from plg$prof_psql_stats_view;

select * from plg$prof_record_source_stats_view;

select preq.*
  from plg$prof_requests preq
  join plg$prof_sessions pses
    on pses.profile_id = preq.profile_id and
       pses.description = 'Profile Session 1';

select pstat.*
  from plg$prof_psql_stats pstat
  join plg$prof_sessions pses
    on pses.profile_id = pstat.profile_id and
       pses.description = 'Profile Session 1'
  order by pstat.profile_id, pstat.request_id, pstat.line_num, pstat.column_num;

select pstat.*
  from plg$prof_record_source_stats pstat
  join plg$prof_sessions pses
    on pses.profile_id = pstat.profile_id and
       pses.description = 'Profile Session 2'
  order by pstat.profile_id, pstat.request_id, pstat.cursor_id, pstat.record_source_id;
11.3. RDB$TIME_ZONE_UTIL
A package of time zone utility functions and procedures.
11.3.1. Function DATABASE_VERSION
RDB$TIME_ZONE_UTIL.DATABASE_VERSION
returns the version of the time zone database.
Return type: VARCHAR(10) CHARACTER SET ASCII
.
select rdb$time_zone_util.database_version()
from rdb$database;
Returns:
DATABASE_VERSION
================
2023c
11.3.2. Procedure TRANSITIONS
RDB$TIME_ZONE_UTIL.TRANSITIONS
returns the set of rules between the start and end timestamps for a named time zone.
Input parameters:
- RDB$TIME_ZONE_NAME: type CHAR(63)
- RDB$FROM_TIMESTAMP: type TIMESTAMP WITH TIME ZONE
- RDB$TO_TIMESTAMP: type TIMESTAMP WITH TIME ZONE
Output parameters:
- RDB$START_TIMESTAMP: type TIMESTAMP WITH TIME ZONE. The start timestamp of the transition
- RDB$END_TIMESTAMP: type TIMESTAMP WITH TIME ZONE. The end timestamp of the transition
- RDB$ZONE_OFFSET: type SMALLINT. The zone’s offset, in minutes
- RDB$DST_OFFSET: type SMALLINT. The zone’s DST offset, in minutes
- RDB$EFFECTIVE_OFFSET: type SMALLINT. Effective offset (ZONE_OFFSET + DST_OFFSET)
select *
from rdb$time_zone_util.transitions(
'America/Sao_Paulo',
timestamp '2017-01-01',
timestamp '2019-01-01');
Returns (RDB$
prefix left off for brevity):
START_TIMESTAMP              END_TIMESTAMP                ZONE_OFFSET DST_OFFSET EFFECTIVE_OFFSET
============================ ============================ =========== ========== ================
2016-10-16 03:00:00.0000 GMT 2017-02-19 01:59:59.9999 GMT        -180         60             -120
2017-02-19 02:00:00.0000 GMT 2017-10-15 02:59:59.9999 GMT        -180          0             -180
2017-10-15 03:00:00.0000 GMT 2018-02-18 01:59:59.9999 GMT        -180         60             -120
2018-02-18 02:00:00.0000 GMT 2018-10-21 02:59:59.9999 GMT        -180          0             -180
2018-10-21 03:00:00.0000 GMT 2019-02-17 01:59:59.9999 GMT        -180         60             -120
12. Context Variables
Unless explicitly mentioned otherwise in an “Available in” section, context variables are available in at least DSQL and PSQL. Availability in ESQL is — bar some exceptions — not tracked by this Language Reference.
12.1. CURRENT_CONNECTION
Unique identifier of the current connection.
BIGINT
CURRENT_CONNECTION
Its value is derived from a counter on the database header page, which is incremented for each new connection. When a database is restored, this counter is reset to zero.
select current_connection from rdb$database
execute procedure P_Login(current_connection)
12.2. CURRENT_DATE
Current server date in the session time zone
DATE
CURRENT_DATE
Within a PSQL module (procedure, trigger or executable block), the value of CURRENT_DATE
will remain constant every time it is read.
If multiple modules call or trigger each other, the value will remain constant throughout the duration of the outermost module.
If you need a progressing value in PSQL (e.g. to measure time intervals), use 'TODAY'
.
select current_date from rdb$database
-- returns e.g. 2011-10-03
12.3. CURRENT_ROLE
Current explicit role of the connection
VARCHAR(63)
CURRENT_ROLE
CURRENT_ROLE
is a context variable containing the explicitly specified role of the currently connected user.
If there is no explicitly specified role, CURRENT_ROLE
is 'NONE'
.
CURRENT_ROLE
always represents a valid role or 'NONE'
.
If a user connects with a non-existing role, the engine silently resets it to 'NONE'
without returning an error.
Roles that are active by default and not explicitly specified on connect or using SET ROLE
are not returned by CURRENT_ROLE
.
Use RDB$ROLE_IN_USE
to check for all active roles.
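For example, the following query (a sketch using the system function RDB$ROLE_IN_USE) lists all roles active in the current session, including default roles that CURRENT_ROLE does not report:

select rdb$role_name
  from rdb$roles
  where rdb$role_in_use(rdb$role_name);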
if (current_role <> 'MANAGER')
then exception only_managers_may_delete;
else
delete from Customers where custno = :custno;
12.4. CURRENT_TIME
Current server time in the session time zone, with time zone information
TIME WITH TIME ZONE
Data type changed in Firebird 4.0 from TIME [WITHOUT TIME ZONE] to TIME WITH TIME ZONE.
CURRENT_TIME [ (<precision>) ]

<precision> ::= 0 | 1 | 2 | 3
The optional precision argument is not supported in ESQL.
Parameter | Description |
---|---|
precision | Precision. The default value is 0. Not supported in ESQL |
The default is 0 decimals, i.e. seconds precision.
CURRENT_TIME
has a default precision of 0 decimals, where CURRENT_TIMESTAMP
has a default precision of 3 decimals.
As a result, CURRENT_TIMESTAMP
is not the exact sum of CURRENT_DATE
and CURRENT_TIME
, unless you explicitly specify a precision (i.e. CURRENT_TIME(3)
or CURRENT_TIMESTAMP(0)
).
Within a PSQL module (procedure, trigger or executable block), the value of CURRENT_TIME
will remain constant every time it is read.
If multiple modules call or trigger each other, the value will remain constant throughout the duration of the outermost module.
If you need a progressing value in PSQL (e.g. to measure time intervals), use 'NOW'
.
CURRENT_TIME and Firebird Time Zone Support
Firebird 4.0 added support for time zones.
As part of this support, an incompatibility with the CURRENT_TIME of previous Firebird versions was introduced: since Firebird 4.0, CURRENT_TIME returns TIME WITH TIME ZONE instead of TIME [WITHOUT TIME ZONE].
In Firebird 5.0, use LOCALTIME if you need the time in the session time zone without time zone information.
select current_time from rdb$database
-- returns e.g. 14:20:19.0000
select current_time(2) from rdb$database
-- returns e.g. 14:20:23.1200
12.5. CURRENT_TIMESTAMP
Current server date and time in the session time zone, with time zone information
TIMESTAMP WITH TIME ZONE
Data type changed in Firebird 4.0 from TIMESTAMP [WITHOUT TIME ZONE] to TIMESTAMP WITH TIME ZONE.
CURRENT_TIMESTAMP [ (<precision>) ]

<precision> ::= 0 | 1 | 2 | 3
The optional precision argument is not supported in ESQL.
Parameter | Description |
---|---|
precision | Precision. The default value is 3. Not supported in ESQL |
The default is 3 decimals, i.e. milliseconds precision.
The default precision of CURRENT_TIME
is 0 decimals, so CURRENT_TIMESTAMP
is not the exact sum of CURRENT_DATE
and CURRENT_TIME
, unless you explicitly specify a precision (i.e. CURRENT_TIME(3)
or CURRENT_TIMESTAMP(0)
).
Within a PSQL module (procedure, trigger or executable block), the value of CURRENT_TIMESTAMP
will remain constant every time it is read.
If multiple modules call or trigger each other, the value will remain constant throughout the duration of the outermost module.
If you need a progressing value in PSQL (e.g. to measure time intervals), use 'NOW'
.
CURRENT_TIMESTAMP and Firebird Time Zone Support
Firebird 4.0 added support for time zones.
As part of this support, an incompatibility with the CURRENT_TIMESTAMP of previous Firebird versions was introduced: since Firebird 4.0, CURRENT_TIMESTAMP returns TIMESTAMP WITH TIME ZONE instead of TIMESTAMP [WITHOUT TIME ZONE].
In Firebird 5.0, use LOCALTIMESTAMP if you need the timestamp in the session time zone without time zone information.
select current_timestamp from rdb$database
-- returns e.g. 2008-08-13 14:20:19.6170
select current_timestamp(2) from rdb$database
-- returns e.g. 2008-08-13 14:20:23.1200
12.6. CURRENT_TRANSACTION
Unique identifier of the current transaction
BIGINT
CURRENT_TRANSACTION
The transaction identifier is derived from a counter on the database header page, which is incremented for each new transaction. When a database is restored, this counter is reset to zero.
select current_transaction from rdb$database
New.Txn_ID = current_transaction;
12.7. CURRENT_USER
Name of the user of the current connection
VARCHAR(63)
CURRENT_USER
CURRENT_USER
is equivalent to USER
.
create trigger bi_customers for customers before insert as
begin
New.added_by = CURRENT_USER;
New.purchases = 0;
end
12.8. DELETING
Indicates if the trigger fired for a DELETE
operation
PSQL — DML triggers only
BOOLEAN
DELETING
Intended for use in multi-action triggers.
if (deleting) then
begin
insert into Removed_Cars (id, make, model, removed)
values (old.id, old.make, old.model, current_timestamp);
end
12.9. GDSCODE
Firebird error code of the error in a WHEN … DO
block
PSQL
INTEGER
GDSCODE
In a “WHEN … DO
” error handling block, the GDSCODE
context variable contains the numeric value of the current Firebird error code.
GDSCODE
is non-zero in WHEN … DO
blocks, if the current error has a Firebird error code.
Outside error handlers, GDSCODE
is always 0.
Outside PSQL, it doesn’t exist at all.
when gdscode grant_obj_notfound, gdscode grant_fld_notfound,
gdscode grant_nopriv, gdscode grant_nopriv_on_base
do
begin
execute procedure log_grant_error(gdscode);
exit;
end
12.10. INSERTING
Indicates if the trigger fired for an INSERT
operation
PSQL — triggers only
BOOLEAN
INSERTING
Intended for use in multi-action triggers.
if (inserting or updating) then
begin
if (new.serial_num is null) then
new.serial_num = gen_id(gen_serials, 1);
end
12.11. LOCALTIME
Current server time in the session time zone, without time zone information
TIME WITHOUT TIME ZONE
LOCALTIME [ (<precision>) ]

<precision> ::= 0 | 1 | 2 | 3
The optional precision argument is not supported in ESQL.
Parameter | Description |
---|---|
precision | Precision. The default value is 0. Not supported in ESQL |
LOCALTIME
returns the current server time in the session time zone.
The default is 0 decimals, i.e. seconds precision.
LOCALTIME
was introduced in Firebird 3.0.4 and Firebird 2.5.9 as an alias of CURRENT_TIME
.
In Firebird 5.0, CURRENT_TIME
returns a TIME WITH TIME ZONE
instead of a TIME [WITHOUT TIME ZONE]
, while LOCALTIME
returns TIME [WITHOUT TIME ZONE]
.
It is recommended to use LOCALTIME
when you do not need time zone information.
LOCALTIME
has a default precision of 0 decimals, where LOCALTIMESTAMP
has a default precision of 3 decimals.
As a result, LOCALTIMESTAMP
is not the exact sum of CURRENT_DATE
and LOCALTIME
, unless you explicitly specify a precision (i.e. LOCALTIME(3)
or LOCALTIMESTAMP(0)
).
Within a PSQL module (procedure, trigger or executable block), the value of LOCALTIME
will remain constant every time it is read.
If multiple modules call or trigger each other, the value will remain constant throughout the duration of the outermost module.
If you need a progressing value in PSQL (e.g. to measure time intervals), use 'NOW'
.
select localtime from rdb$database
-- returns e.g. 14:20:19.0000
select localtime(2) from rdb$database
-- returns e.g. 14:20:23.1200
12.12. LOCALTIMESTAMP
Current server time and date in the session time zone, without time zone information
TIMESTAMP WITHOUT TIME ZONE
LOCALTIMESTAMP [ (<precision>) ]

<precision> ::= 0 | 1 | 2 | 3
The optional precision argument is not supported in ESQL.
Parameter | Description |
---|---|
precision | Precision. The default value is 3. Not supported in ESQL |
LOCALTIMESTAMP
returns the current server date and time in the session time zone.
The default is 3 decimals, i.e. milliseconds precision.
LOCALTIMESTAMP
was introduced in Firebird 3.0.4 and Firebird 2.5.9 as a synonym of CURRENT_TIMESTAMP
.
In Firebird 5.0, CURRENT_TIMESTAMP
returns a TIMESTAMP WITH TIME ZONE
instead of a TIMESTAMP [WITHOUT TIME ZONE]
, while LOCALTIMESTAMP
returns TIMESTAMP [WITHOUT TIME ZONE]
.
It is recommended to use LOCALTIMESTAMP
when you do not need time zone information.
The default precision of LOCALTIME
is 0 decimals, so LOCALTIMESTAMP
is not the exact sum of CURRENT_DATE
and LOCALTIME
, unless you explicitly specify a precision (i.e. LOCALTIME(3)
or LOCALTIMESTAMP(0)
).
Within a PSQL module (procedure, trigger or executable block), the value of LOCALTIMESTAMP
will remain constant every time it is read.
If multiple modules call or trigger each other, the value will remain constant throughout the duration of the outermost module.
If you need a progressing value in PSQL (e.g. to measure time intervals), use 'NOW'
.
select localtimestamp from rdb$database
-- returns e.g. 2008-08-13 14:20:19.6170
select localtimestamp(2) from rdb$database
-- returns e.g. 2008-08-13 14:20:23.1200
12.13. NEW
Record with the inserted or updated values of a row
PSQL — triggers only,
DSQL — RETURNING
clause of UPDATE
, UPDATE OR INSERT
and MERGE
Record type
NEW.column_name
Parameter | Description |
---|---|
column_name | Column name to access |
NEW
contains the new version of a database record that has just been inserted or updated.
NEW
is read-only in AFTER
triggers.
In multi-action triggers NEW
is always available.
However, if the trigger is fired by a DELETE
, there will be no new version of the record.
In that situation, reading from NEW will always return NULL; writing to it will cause a runtime exception.
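As an illustration, a multi-action trigger sketch that only touches NEW when the firing operation provides it (the audit_log table and its columns are hypothetical):

create trigger tr_customers_audit for customers
after insert or update or delete
as
begin
    -- NEW is NULL for DELETE, so guard the access.
    if (not deleting) then
        insert into audit_log (custno, changed_at)
        values (new.custno, current_timestamp);
end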
12.14. 'NOW'
Current date and/or time in cast context
CHAR(3)
, or depends on explicit CAST
'NOW'
is not a variable, but a string literal or datetime mnemonic.
It is, however, special in the sense that when you CAST()
it to a datetime type, you will get the current date and/or time.
If the datetime type has a time component, the precision is 3 decimals, i.e. milliseconds.
'NOW'
is case-insensitive, and the engine ignores leading or trailing spaces when casting.
'NOW'
always returns the actual date/time, even in PSQL modules, where CURRENT_DATE
, CURRENT_TIME
and CURRENT_TIMESTAMP
return the same value throughout the duration of the outermost routine.
This makes 'NOW'
useful for measuring time intervals in triggers, procedures and executable blocks.
Except in the situation mentioned above, reading CURRENT_DATE
, CURRENT_TIME
and CURRENT_TIMESTAMP
is generally preferable to casting 'NOW'
.
Be aware though that CURRENT_TIME
defaults to seconds precision; to get milliseconds precision, use CURRENT_TIME(3)
.
Firebird 3.0 and earlier allowed the use of 'NOW' in datetime literals (“shorthand casts”); this is no longer allowed since Firebird 4.0.
select 'Now' from rdb$database
-- returns 'Now'
select cast('Now' as date) from rdb$database
-- returns e.g. 2008-08-13
select cast('now' as time) from rdb$database
-- returns e.g. 14:20:19.6170
select cast('NOW' as timestamp) from rdb$database
-- returns e.g. 2008-08-13 14:20:19.6170
12.15. OLD
Record with the initial values of a row before update or delete
PSQL — triggers only,
DSQL — RETURNING
clause of UPDATE
, UPDATE OR INSERT
and MERGE
Record type
OLD.column_name
Parameter | Description |
---|---|
column_name | Column name to access |
OLD
contains the existing version of a database record just before a deletion or update.
The OLD
record is read-only.
In multi-action triggers OLD
is always available.
However, if the trigger is fired by an INSERT
, there is obviously no pre-existing version of the record.
In that situation, reading from OLD
will always return NULL
.
12.16. RESETTING
Indicates if the trigger fired during a session reset
PSQL — triggers only
BOOLEAN
RESETTING
Its value is TRUE
if session reset is in progress and FALSE
otherwise.
Intended for use in ON DISCONNECT
and ON CONNECT
database triggers to detect an ALTER SESSION RESET
.
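A sketch of an ON DISCONNECT database trigger that skips its logging work while a session reset is in progress (the log_disconnect procedure is hypothetical):

create trigger tr_log_disconnect
on disconnect
as
begin
    -- ALTER SESSION RESET also fires ON DISCONNECT; skip logging then.
    if (not resetting) then
        execute procedure log_disconnect(current_connection);
end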
12.17. ROW_COUNT
Number of affected rows of the last executed statement
PSQL
INTEGER
ROW_COUNT
The ROW_COUNT
context variable contains the number of rows affected by the most recent DML statement (INSERT
, UPDATE
, DELETE
, SELECT
or FETCH
) in the current PSQL module.
SELECT
and FETCH
- After a singleton SELECT, ROW_COUNT is 1 if a data row was retrieved and 0 otherwise.
- In a FOR SELECT loop, ROW_COUNT is incremented with every iteration (starting at 0 before the first).
- After a FETCH from a cursor, ROW_COUNT is 1 if a data row was retrieved and 0 otherwise. Fetching more records from the same cursor does not increment ROW_COUNT beyond 1.
update Figures set Number = 0 where id = :id;
if (row_count = 0) then
insert into Figures (id, Number) values (:id, 0);
12.18. SQLCODE
SQLCODE of the Firebird error in a WHEN … DO
block
PSQL
2.5.1
INTEGER
SQLCODE
In a “WHEN … DO
” error handling block, the SQLCODE
context variable contains the numeric value of the current SQL error code.
SQLCODE
is non-zero in WHEN … DO
blocks, if the current error has a SQL error code.
Outside error handlers, SQLCODE
is always 0.
Outside PSQL, it doesn’t exist at all.
when any
do
begin
if (sqlcode <> 0) then
Msg = 'An SQL error occurred!';
else
Msg = 'Something bad happened!';
exception ex_custom Msg;
end
12.19. SQLSTATE
SQLSTATE code of the Firebird error in a WHEN … DO
block
PSQL
CHAR(5)
SQLSTATE
In a “WHEN … DO
” error handler, the SQLSTATE
context variable contains the 5-character, SQL-compliant status code of the current error.
Outside error handlers, SQLSTATE
is always '00000'
.
Outside PSQL, it is not available at all.
SQLSTATE
is destined to replace SQLCODE
.
The latter is now deprecated in Firebird and will disappear in a future version.
Each SQLSTATE
code is the concatenation of a 2-character class and a 3-character subclass.
Classes 00 (successful completion), 01 (warning) and 02 (no data) represent completion conditions.
Every status code outside these classes is an exception.
Because classes 00, 01 and 02 don’t raise an error, they won’t ever show up in the SQLSTATE
variable.
For a complete listing of SQLSTATE
codes, consult the SQLSTATE Codes and Message Texts section in Appendix B, Exception Codes and Messages.
when any
do
begin
Msg = case sqlstate
when '22003' then 'Numeric value out of range.'
when '22012' then 'Division by zero.'
when '23000' then 'Integrity constraint violation.'
else 'Something bad happened! SQLSTATE = ' || sqlstate
end;
exception ex_custom Msg;
end
12.20. 'TODAY'
Current date in cast context
CHAR(5)
, or depends on explicit CAST
'TODAY'
is not a variable, but a string literal or date mnemonic.
It is, however, special in the sense that when you CAST()
it to a date/time type, you will get the current date.
If the target datetime type has a time component, it will be set to zero.
'TODAY'
is case-insensitive, and the engine ignores leading or trailing spaces when casting.
'TODAY'
always returns the actual date, even in PSQL modules, where CURRENT_DATE
, CURRENT_TIME
and CURRENT_TIMESTAMP
return the same value throughout the duration of the outermost routine.
This makes 'TODAY'
useful for measuring time intervals in triggers, procedures and executable blocks (at least if your procedures are running for days).
Except in the situation mentioned above, reading CURRENT_DATE
is generally preferable to casting 'TODAY'
.
When cast to a TIMESTAMP WITH TIME ZONE
, the time reflected will be 00:00:00 in UTC rebased to the session time zone.
Firebird 3.0 and earlier allowed the use of 'TODAY' in datetime literals (“shorthand casts”); this is no longer allowed since Firebird 4.0.
select 'Today' from rdb$database
-- returns 'Today'
select cast('Today' as date) from rdb$database
-- returns e.g. 2011-10-03
select cast('TODAY' as timestamp) from rdb$database
-- returns e.g. 2011-10-03 00:00:00.0000
12.21. 'TOMORROW'
Tomorrow’s date in cast context
CHAR(8)
, or depends on explicit CAST
'TOMORROW'
is not a variable, but a string literal.
It is, however, special in the sense that when you CAST()
it to a date/time type, you will get the date of the next day.
See also 'TODAY'
.
select 'Tomorrow' from rdb$database
-- returns 'Tomorrow'
select cast('Tomorrow' as date) from rdb$database
-- returns e.g. 2011-10-04
select cast('TOMORROW' as timestamp) from rdb$database
-- returns e.g. 2011-10-04 00:00:00.0000
12.22. UPDATING
Indicates if the trigger fired for an UPDATE
operation
PSQL — triggers only
BOOLEAN
UPDATING
Intended for use in multi-action triggers.
if (inserting or updating) then
begin
if (new.serial_num is null) then
new.serial_num = gen_id(gen_serials, 1);
end
12.23. 'YESTERDAY'
Yesterday’s date in cast context
CHAR(9)
, or depends on explicit CAST
'YESTERDAY'
is not a variable, but a string literal.
It is, however, special in the sense that when you CAST()
it to a date/time type, you will get the date of the day before.
See also 'TODAY'
.
select 'Yesterday' from rdb$database
-- returns 'Yesterday'
select cast('Yesterday' as date) from rdb$database
-- returns e.g. 2011-10-02
select cast('YESTERDAY' as timestamp) from rdb$database
-- returns e.g. 2011-10-02 00:00:00.0000
12.24. USER
Name of the user of the current connection
VARCHAR(63)
USER
USER
is equivalent to (or, alias of) CURRENT_USER
.
create trigger bi_customers for customers before insert as
begin
New.added_by = USER;
New.purchases = 0;
end
13. Transaction Control
Almost all operations in Firebird occur in the context of a transaction. Units of work are isolated between a start point and end point. Changes to data remain reversible until the moment the client application instructs the server to commit them.
Unless explicitly mentioned otherwise in an “Available in” section, transaction control statements are available in DSQL. Availability in ESQL is — bar some exceptions — not tracked by this Language Reference. Transaction control statements are not available in PSQL.
13.1. Transaction Statements
Firebird has a small lexicon of SQL statements to start, manage, commit and reverse (roll back) the transactions that form the boundaries of most database tasks:
- SET TRANSACTION
-
configures and starts a transaction
- COMMIT
-
signals the end of a unit of work and writes changes permanently to the database
- ROLLBACK
-
undoes the changes performed in the transaction or to a savepoint
- SAVEPOINT
-
marks a position in the log of work done, in case a partial rollback is needed
- RELEASE SAVEPOINT
-
erases a savepoint
13.1.1. SET TRANSACTION
Configures and starts a transaction
DSQL, ESQL
SET TRANSACTION
   [NAME tr_name]
   [<tr_option> ...]

<tr_option> ::=
    READ {ONLY | WRITE}
  | [NO] WAIT
  | [ISOLATION LEVEL] <isolation_level>
  | NO AUTO UNDO
  | RESTART REQUESTS
  | AUTO COMMIT
  | IGNORE LIMBO
  | LOCK TIMEOUT seconds
  | RESERVING <tables>
  | USING <dbhandles>

<isolation_level> ::=
    SNAPSHOT [AT NUMBER snapshot_number]
  | SNAPSHOT TABLE [STABILITY]
  | READ {UNCOMMITTED | COMMITTED} [<read-committed-opt>]

<read-committed-opt> ::= [NO] RECORD_VERSION | READ CONSISTENCY

<tables> ::= <table_spec> [, <table_spec> ...]

<table_spec> ::= tablename [, tablename ...] [FOR [SHARED | PROTECTED] {READ | WRITE}]

<dbhandles> ::= dbhandle [, dbhandle ...]
Parameter | Description |
---|---|
tr_name | Transaction name. Available only in ESQL |
tr_option | Optional transaction option. Each option should be specified at most once, and some options are mutually exclusive (e.g. WAIT and NO WAIT) |
seconds | The time in seconds for the statement to wait in case a conflict occurs. Has to be greater than or equal to 0 |
snapshot_number | Snapshot number to use for this transaction |
tables | The list of tables to reserve |
dbhandles | The list of databases the transaction can access. Available only in ESQL |
table_spec | Table reservation specification |
tablename | The name of the table to reserve |
dbhandle | The handle of the database the transaction can access. Available only in ESQL |
Generally, only client applications start transactions. Exceptions are when the server starts an autonomous transaction, and transactions for certain background system threads/processes, such as sweeping.
A client application can start any number of concurrently running transactions. A single connection can have multiple concurrent active transactions (though not all drivers or access components support this). A limit does exist for the total number of transactions in all client applications working with one particular database, counted from the moment the database was restored from its gbak backup or from the moment the database was originally created. The limit is 2^48, or 281,474,976,710,656.
All clauses in the SET TRANSACTION
statement are optional.
If the statement starting a transaction has no clauses specified, the transaction will be started with default values for access mode, lock resolution mode and isolation level, which are:
SET TRANSACTION
READ WRITE
WAIT
ISOLATION LEVEL SNAPSHOT;
Database drivers or access components may use different defaults for transactions started through their API. Check their documentation for details.
The server assigns integer numbers to transactions sequentially.
Whenever a client starts any transaction, either explicitly defined or by default, the server sends the transaction ID to the client.
This number can be retrieved in SQL using the context variable CURRENT_TRANSACTION
.
Some database drivers — or their governing specifications — require that you configure and start transactions through API methods.
In that case, using SET TRANSACTION directly may be unsupported or may interfere with the driver’s transaction handling. Check the documentation of your driver for details.
The NAME
and USING
clauses are only valid in ESQL.
Transaction Name
The optional NAME
attribute defines the name of a transaction.
Use of this attribute is available only in Embedded SQL (ESQL).
In ESQL applications, named transactions make it possible to have several transactions active simultaneously in one application.
If named transactions are used, a host-language variable with the same name must be declared and initialized for each named transaction.
This is a limitation that prevents dynamic specification of transaction names and thus rules out transaction naming in DSQL.
Transaction Parameters
The main parameters of a transaction are:
-
data access mode (
READ WRITE
,READ ONLY
) -
lock resolution mode (
WAIT
,NO WAIT
) with an optionalLOCK TIMEOUT
specification -
isolation level (
READ COMMITTED
,SNAPSHOT
,SNAPSHOT TABLE STABILITY
).The
READ UNCOMMITTED
isolation level is a synonym forREAD COMMITTED
, and is provided only for syntax compatibility. It provides identical semantics to READ COMMITTED
, and does not allow you to view uncommitted changes of other transactions. -
a mechanism for reserving or releasing tables (the
RESERVING
clause)
Access Mode
The two database access modes for transactions are READ WRITE
and READ ONLY
.
-
If the access mode is
READ WRITE
, operations in the context of this transaction can be both read operations and data update operations. This is the default mode. -
If the access mode is
READ ONLY
, onlySELECT
operations can be executed in the context of this transaction. Any attempt to change data in the context of such a transaction will result in database exceptions. However, this does not apply to global temporary tables (GTT), which are allowed to be changed inREAD ONLY
transactions, see Global Temporary Tables (GTT) in Chapter 5, Data Definition (DDL) Statements for details.
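As an illustration, a read-only transaction permits reads but rejects data modifications (the table name below is hypothetical):

```sql
-- Start a read-only transaction explicitly
SET TRANSACTION READ ONLY WAIT ISOLATION LEVEL SNAPSHOT;

SELECT * FROM EMPLOYEE;                     -- allowed
UPDATE EMPLOYEE SET SALARY = SALARY * 1.1;  -- raises "attempted update during read-only transaction"

COMMIT;
```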
Lock Resolution Mode
When several client processes work with the same database, locks may occur when one process makes uncommitted changes in a table row, or deletes a row, and another process tries to update or delete the same row. Such locks are called update conflicts.
Locks may occur in other situations when multiple transaction isolation levels are used.
The two lock resolution modes are WAIT
and NO WAIT
.
WAIT
ModeIn the WAIT
mode (the default mode), if a conflict occurs between two parallel processes executing concurrent data updates in the same database, a WAIT
transaction will wait till the other transaction has finished — by committing (COMMIT
) or rolling back (ROLLBACK
).
The client application with the WAIT
transaction will be put on hold until the conflict is resolved.
If a LOCK TIMEOUT
is specified for the WAIT
transaction, waiting will continue only for the number of seconds specified in this clause.
If the lock is unresolved at the end of the specified interval, the error message “Lock time-out on wait transaction” is returned to the client.
Lock resolution behaviour can vary a little, depending on the transaction isolation level.
NO WAIT
ModeIn the NO WAIT
mode, a transaction will immediately throw a database exception if a conflict occurs.
LOCK TIMEOUT
is a separate transaction option, but can only be used for WAIT
transactions.
Specifying LOCK TIMEOUT
with a NO WAIT
transaction will raise an error “invalid parameter in transaction parameter block -Option isc_tpb_lock_timeout is not valid if isc_tpb_nowait was used previously in TPB”
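For example, the following statements are valid, whereas combining NO WAIT with LOCK TIMEOUT raises the error quoted above:

```sql
-- Wait indefinitely for conflicting transactions to finish (the default)
SET TRANSACTION WAIT;

-- Wait at most 10 seconds, then report "Lock time-out on wait transaction"
SET TRANSACTION WAIT LOCK TIMEOUT 10;

-- Report a conflict immediately
SET TRANSACTION NO WAIT;
```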
Isolation Level
Keeping the work of one database task separated from others is what isolation is about. Changes made by one statement become visible to all remaining statements executing within the same transaction, regardless of its isolation level. Changes that are in progress within other transactions remain invisible to the current transaction as long as they remain uncommitted. The isolation level and, sometimes, other attributes, determine how transactions will interact when another transaction wants to commit work.
The ISOLATION LEVEL
attribute defines the isolation level for the transaction being started.
It is the most significant transaction parameter for determining its behavior towards other concurrently running transactions.
The three isolation levels supported in Firebird are:
-
SNAPSHOT
-
SNAPSHOT TABLE STABILITY
-
READ COMMITTED
with three specifications (READ CONSISTENCY
,NO RECORD_VERSION
andRECORD_VERSION
)
SNAPSHOT
Isolation LevelSNAPSHOT
isolation level — the default level — allows the transaction to see only those changes that were committed before it was started.
Any committed changes made by concurrent transactions will not be seen in a SNAPSHOT
transaction while it is active.
The changes will become visible to a new transaction once the current transaction is either committed or rolled back, but not if it is only a roll back to a savepoint.
The SNAPSHOT
isolation level is also known as “concurrency”.
Autonomous Transactions
Changes made by autonomous transactions are not seen in the context of the SNAPSHOT transaction that started them. |
Using SNAPSHOT AT NUMBER snapshot_number
, a SNAPSHOT
transaction can be started sharing the snapshot of another transaction.
With this feature it’s possible to create parallel processes (using different attachments) reading consistent data from a database.
For example, a backup process may create multiple threads reading data from the database in parallel, or a web service may dispatch distributed sub-services doing processing in parallel.
Alternatively, this feature can also be used via the API, using Transaction Parameter Buffer item isc_tpb_at_snapshot_number
.
The snapshot_number from an active transaction can be obtained with RDB$GET_CONTEXT('SYSTEM', 'SNAPSHOT_NUMBER')
in SQL or using the transaction information API call with fb_info_tra_snapshot_number
information tag.
The snapshot_number passed to the new transaction must be a snapshot of a currently active transaction.
To share a stable view between transactions, the other transaction also needs to have isolation level SNAPSHOT. |
SET TRANSACTION SNAPSHOT AT NUMBER 12345;
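The number used above would first be obtained in the transaction whose snapshot is to be shared, for example:

```sql
-- In the donor transaction/attachment: read the current snapshot number
SELECT RDB$GET_CONTEXT('SYSTEM', 'SNAPSHOT_NUMBER')
FROM RDB$DATABASE;
```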
SNAPSHOT TABLE STABILITY
Isolation LevelThe SNAPSHOT TABLE STABILITY
— or SNAPSHOT TABLE
— isolation level is the most restrictive.
As in SNAPSHOT
, a transaction in SNAPSHOT TABLE STABILITY
isolation sees only those changes that were committed before the current transaction was started.
After a SNAPSHOT TABLE STABILITY
transaction is started, no other transactions can make any changes to any table in the database that has changes pending for this transaction.
Other transactions can read other data, but any attempt at inserting, updating or deleting by a parallel process will cause conflict exceptions.
The RESERVING
clause can be used to allow other transactions to change data in some tables.
If any other transaction has an uncommitted change pending in any (non-SHARED
) table listed in the RESERVING
clause, trying to start a SNAPSHOT TABLE STABILITY
transaction will result in an indefinite wait (default or explicit WAIT
), or an exception (NO WAIT
or after expiration of the LOCK TIMEOUT
).
The SNAPSHOT TABLE STABILITY
isolation level is also known as “consistency”.
READ COMMITTED
Isolation LevelThe READ COMMITTED
isolation level allows all data changes that other transactions have committed since it started to be seen immediately by the uncommitted current transaction.
Uncommitted changes are not visible to a READ COMMITTED
transaction.
To retrieve the updated list of rows in the table you are interested in — “refresh” — the SELECT
statement needs to be executed again, whilst still in the uncommitted READ COMMITTED
transaction.
Variants of READ COMMITTED
One of three modifying parameters can be specified for READ COMMITTED
transactions, depending on the kind of conflict resolution desired: READ CONSISTENCY
, RECORD_VERSION
or NO RECORD_VERSION
.
When the ReadConsistency
setting is set to 1
in firebird.conf
(the default) or in databases.conf
, these variants are effectively ignored and behave as READ CONSISTENCY
.
Otherwise, these variants are mutually exclusive.
-
NO RECORD_VERSION
(the default ifReadConsistency = 0
) is a kind of two-phase locking mechanism: it will make the transaction unable to write to any row that has an update pending from another transaction.-
with
NO WAIT
specified, it will throw a lock conflict error immediately -
with
WAIT
specified, it will wait until the other transaction is either committed or rolled back. If the other transaction is rolled back, or if it is committed and its transaction ID is older than the current transaction’s ID, then the current transaction’s change is allowed. A lock conflict error is returned if the other transaction was committed and its ID was newer than that of the current transaction.
-
-
With
RECORD_VERSION
specified, the transaction reads the latest committed version of the row, regardless of other pending versions of the row. The lock resolution strategy (WAIT
orNO WAIT
) does not affect the behavior of the transaction at its start in any way. -
With
READ CONSISTENCY
specified (orReadConsistency = 1
), the execution of a statement obtains a snapshot of the database to ensure a consistent read at the statement-level of the transactions committed when execution started.The other two variants can result in statement-level inconsistent reads as they may read some but not all changes of a concurrent transaction if that transaction commits during statement execution. For example, a
SELECT COUNT(*)
could read some, but not all inserted records of another transaction if the commit of that transaction occurs while the statement is reading records. This statement-level snapshot is obtained for the execution of a top-level statement; nested statements (triggers, stored procedures and functions, dynamic statements, etc.) use the statement-level snapshot created for the top-level statement.
Obtaining a snapshot for |
Setting |
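A sketch of how the variants are requested with the SET TRANSACTION syntax described above:

```sql
-- Statement-level snapshots for consistent reads
SET TRANSACTION READ WRITE WAIT
    ISOLATION LEVEL READ COMMITTED READ CONSISTENCY;

-- Legacy variants; with ReadConsistency = 1 (the default) these
-- behave as READ CONSISTENCY anyway
SET TRANSACTION ISOLATION LEVEL READ COMMITTED RECORD_VERSION;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED NO RECORD_VERSION;
```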
NO AUTO UNDO
The NO AUTO UNDO
option affects the handling of record versions (garbage) produced by the transaction in the event of rollback.
With NO AUTO UNDO
flagged, the ROLLBACK
statement marks the transaction as rolled back without deleting the record versions created in the transaction.
They are left to be mopped up later by garbage collection.
NO AUTO UNDO
might be useful when a lot of separate statements are executed that change data in conditions where the transaction is likely to be committed successfully most of the time.
The NO AUTO UNDO
option is ignored for transactions where no changes are made.
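For example, a bulk load that is expected to succeed might be started as:

```sql
-- If this transaction is rolled back anyway, its record versions are
-- left for background garbage collection instead of being undone at once
SET TRANSACTION READ WRITE WAIT
    ISOLATION LEVEL READ COMMITTED
    NO AUTO UNDO;
```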
RESTART REQUESTS
According to the Firebird sources, this will
Restart all requests in the current attachment to utilize the passed transaction.
The exact semantics and effects of this clause are not clear, and we recommend you do not use this clause.
AUTO COMMIT
Specifying AUTO COMMIT
enables auto-commit mode for the transaction.
In auto-commit mode, Firebird will internally execute the equivalent of COMMIT RETAIN
after each statement execution.
This is not a generally useful auto-commit mode;
the same transaction context is retained until the transaction is ended through a commit or rollback.
IGNORE LIMBO
This flag is used to signal that records created by limbo transactions are to be ignored. Transactions are left “in limbo” if the second stage of a two-phase commit fails.
RESERVING
The RESERVING
clause in the SET TRANSACTION
statement reserves tables specified in the table list.
Reserving a table prevents other transactions from making changes to it or even, with the inclusion of certain parameters, from reading data from it while this transaction is running.
A RESERVING
clause can also be used to specify a list of tables that can be changed by other transactions, even if the transaction is started with the SNAPSHOT TABLE STABILITY
isolation level.
One RESERVING
clause is used to specify as many reserved tables as required.
RESERVING
ClauseIf one of the keywords SHARED
or PROTECTED
is omitted, SHARED
is assumed.
If the whole FOR
clause is omitted, FOR SHARED READ
is assumed.
The names and compatibility of the four access options for reserving tables are not obvious.
 | SHARED READ | SHARED WRITE | PROTECTED READ | PROTECTED WRITE |
---|---|---|---|---|
SHARED READ | Yes | Yes | Yes | Yes |
SHARED WRITE | Yes | Yes | No | No |
PROTECTED READ | Yes | No | Yes | No |
PROTECTED WRITE | Yes | No | No | No |
The combinations of these RESERVING
clause flags for concurrent access depend on the isolation levels of the concurrent transactions:
-
SNAPSHOT
isolation-
Concurrent
SNAPSHOT
transactions withSHARED READ
do not affect one another’s access
A concurrent mix of
SNAPSHOT
andREAD COMMITTED
transactions withSHARED WRITE
do not affect one another’s access, but they block transactions withSNAPSHOT TABLE STABILITY
isolation from either reading from or writing to the specified table(s) -
Concurrent transactions with any isolation level and
PROTECTED READ
can only read data from the reserved tables. Any attempt to write to them will cause an exception -
With
PROTECTED WRITE
, concurrent transactions withSNAPSHOT
andREAD COMMITTED
isolation cannot write to the specified tables. Transactions withSNAPSHOT TABLE STABILITY
isolation cannot read from or write to the reserved tables at all.
-
-
SNAPSHOT TABLE STABILITY
isolation-
All concurrent transactions with
SHARED READ
, regardless of their isolation levels, can read from or write (if inREAD WRITE
mode) to the reserved tables -
Concurrent transactions with
SNAPSHOT
andREAD COMMITTED
isolation levels andSHARED WRITE
can read data from and write (if inREAD WRITE
mode) to the specified tables but concurrent access to those tables from transactions withSNAPSHOT TABLE STABILITY
is blocked whilst these transactions are active -
Concurrent transactions with any isolation level and
PROTECTED READ
can only read from the reserved tables -
With
PROTECTED WRITE
, concurrentSNAPSHOT
andREAD COMMITTED
transactions can read from but not write to the reserved tables. Access by transactions with theSNAPSHOT TABLE STABILITY
isolation level is blocked.
-
-
READ COMMITTED
isolation-
With
SHARED READ
, all concurrent transactions with any isolation level can both read from and write (if inREAD WRITE
mode) to the reserved tables -
SHARED WRITE
allows all transactions inSNAPSHOT
andREAD COMMITTED
isolation to read from and write (if inREAD WRITE
mode) to the specified tables and blocks access from transactions withSNAPSHOT TABLE STABILITY
isolation -
With
PROTECTED READ
, concurrent transactions with any isolation level can only read from the reserved tables -
With
PROTECTED WRITE
, concurrent transactions inSNAPSHOT
andREAD COMMITTED
isolation can read from but not write to the specified tables. Access from transactions inSNAPSHOT TABLE STABILITY
isolation is blocked.
-
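For example (the table names are illustrative):

```sql
-- Consistency-level transaction that still lets others write to INVOICES
SET TRANSACTION ISOLATION LEVEL SNAPSHOT TABLE STABILITY
    RESERVING INVOICES FOR SHARED WRITE;

-- Snapshot transaction that blocks concurrent writes to two tables
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
    RESERVING ORDERS, ORDER_LINES FOR PROTECTED READ;
```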
In Embedded SQL, the USING
clause can be used to conserve system resources by
limiting the number of databases a transaction can access.
USING
is mutually exclusive with RESERVING
.
A USING
clause in SET TRANSACTION
syntax is not supported in DSQL.
13.1.2. COMMIT
Commits a transaction
DSQL, ESQL
COMMIT [TRANSACTION tr_name] [WORK] [RETAIN [SNAPSHOT] | RELEASE];
Parameter | Description |
---|---|
tr_name |
Transaction name. Available only in ESQL |
The COMMIT
statement commits all work carried out in the context of this transaction (inserts, updates, deletes, selects, execution of procedures).
New record versions become available to other transactions and, unless the RETAIN
clause is employed, all server resources allocated to its work are released.
If any conflicts or other errors occur in the database during the process of committing the transaction, the transaction is not committed, and the reasons are passed back to the user application for handling, with the opportunity to attempt another commit or to roll the transaction back.
The TRANSACTION
and RELEASE
clauses are only valid in ESQL.
COMMIT
Options
-
The optional
TRANSACTION tr_name
clause, available only in Embedded SQL, specifies the name of the transaction to be committed. With noTRANSACTION
clause,COMMIT
is applied to the default transaction.In ESQL applications, named transactions make it possible to have several transactions active simultaneously in one application. If named transactions are used, a host-language variable with the same name must be declared and initialized for each named transaction. This is a limitation that prevents dynamic specification of transaction names and thus, rules out transaction naming in DSQL.
-
The keyword
RELEASE
is available only in Embedded SQL and enables disconnection from all databases after the transaction is committed.RELEASE
is retained in Firebird only for compatibility with legacy versions of InterBase. It has been superseded in ESQL by theDISCONNECT
statement. -
The
RETAIN [SNAPSHOT]
clause is used for the “soft” commit, variously referred to amongst host languages and their practitioners asCOMMIT WITH RETAIN
, “CommitRetaining”, “warm commit”, et al. The transaction is committed, but some server resources are retained and a new transaction is restarted transparently with the same Transaction ID. The state of row caches and cursors remains as it was before the soft commit.For soft-committed transactions whose isolation level is
SNAPSHOT
orSNAPSHOT TABLE STABILITY
, the view of database state does not update to reflect changes by other transactions, and the user of the application instance continues to have the same view as when the original transaction started. Changes made during the life of the retained transaction are visible to that transaction, of course.
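For example, a batch job might publish its work periodically while keeping its transaction context (the table name is hypothetical):

```sql
UPDATE ACCOUNTS SET BALANCE = BALANCE - 100 WHERE ACC_ID = 1;
-- Publish the change to other transactions; the same Transaction ID,
-- cursors and row caches remain in place
COMMIT RETAIN;
```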
Prefer commit to rollback when reading
Use of the COMMIT statement, rather than ROLLBACK, is recommended for ending transactions that only read data. |
13.1.3. ROLLBACK
Rolls back a transaction or to a savepoint
DSQL, ESQL
ROLLBACK [TRANSACTION tr_name] [WORK] [RETAIN [SNAPSHOT] | RELEASE] | ROLLBACK [WORK] TO [SAVEPOINT] sp_name
Parameter | Description |
---|---|
tr_name |
Transaction name. Available only in ESQL |
sp_name |
Savepoint name. Available only in DSQL |
The ROLLBACK
statement rolls back all work carried out in the context of this transaction (inserts, updates, deletes, selects, execution of procedures).
ROLLBACK
never fails and, thus, never causes exceptions.
Unless the RETAIN
clause is employed, all server resources allocated to the work of the transaction are released.
The TRANSACTION
and RELEASE
clauses are only valid in ESQL.
The ROLLBACK TO SAVEPOINT
statement is not available in ESQL.
ROLLBACK
Options
-
The optional
TRANSACTION tr_name
clause, available only in Embedded SQL, specifies the name of the transaction to be rolled back. With noTRANSACTION
clause,ROLLBACK
is applied to the default transaction.In ESQL applications, named transactions make it possible to have several transactions active simultaneously in one application. If named transactions are used, a host-language variable with the same name must be declared and initialized for each named transaction. This is a limitation that prevents dynamic specification of transaction names and thus, rules out transaction naming in DSQL.
-
The keyword
RETAIN
specifies that, although all work of the transaction is to be rolled back, the transaction context is to be retained. Some server resources are retained, and the transaction is restarted transparently with the same Transaction ID. The state of row caches and cursors is kept as it was before the “soft” rollback. For transactions whose isolation level is
SNAPSHOT
orSNAPSHOT TABLE STABILITY
, the view of database state is not updated by the soft rollback to reflect changes by other transactions. The user of the application instance continues to have the same view as when the transaction started originally. Changes that were made and soft-committed during the life of the retained transaction are visible to that transaction, of course.
ROLLBACK TO SAVEPOINT
The ROLLBACK TO SAVEPOINT
statement specifies the name of a savepoint to which changes are to be rolled back.
The effect is to roll back all changes made within the transaction, from the specified savepoint forward until the point when ROLLBACK TO SAVEPOINT
is requested.
ROLLBACK TO SAVEPOINT
performs the following operations:
-
Any database mutations performed since the savepoint was created are undone. User variables set with
RDB$SET_CONTEXT()
remain unchanged. -
Any savepoints that were created after the one named are destroyed. Savepoints earlier than the one named are preserved, along with the named savepoint itself. Repeated rollbacks to the same savepoint are thus allowed.
-
All implicit and explicit record locks that were acquired since the savepoint are released. Other transactions that have requested access to rows locked after the savepoint are not notified and will continue to wait until the transaction is committed or rolled back. Other transactions that have not already requested the rows can request and access the unlocked rows immediately.
13.1.4. SAVEPOINT
Creates a savepoint
SAVEPOINT sp_name
Parameter | Description |
---|---|
sp_name |
Savepoint name. Available only in DSQL |
The SAVEPOINT
statement creates an SQL-compliant savepoint that acts as a marker in the “stack” of data activities within a transaction.
Subsequently, the tasks performed in the “stack” can be undone back to this savepoint, leaving the earlier work and older savepoints untouched.
Savepoints are sometimes called “nested transactions”.
If a savepoint already exists with the same name as the name supplied for the new one, the existing savepoint is released, and a new one is created using the supplied name.
To roll changes back to the savepoint, the statement ROLLBACK TO SAVEPOINT
is used.
Memory Considerations
The internal mechanism beneath savepoints can consume large amounts of memory, especially if the same rows receive multiple updates in one transaction.
When a savepoint is no longer needed, but the transaction still has work to do, a RELEASE SAVEPOINT statement will erase it and free the resources it holds. |
CREATE TABLE TEST (ID INTEGER);
COMMIT;
INSERT INTO TEST VALUES (1);
COMMIT;
INSERT INTO TEST VALUES (2);
SAVEPOINT Y;
DELETE FROM TEST;
SELECT * FROM TEST; -- returns no rows
ROLLBACK TO Y;
SELECT * FROM TEST; -- returns two rows
ROLLBACK;
SELECT * FROM TEST; -- returns one row
13.1.5. RELEASE SAVEPOINT
Releases a savepoint
RELEASE SAVEPOINT sp_name [ONLY]
Parameter | Description |
---|---|
sp_name |
Savepoint name. Available only in DSQL |
The statement RELEASE SAVEPOINT
erases a named savepoint, freeing up all the resources it encompasses.
By default, all the savepoints created after the named savepoint are released as well.
The qualifier ONLY
directs the engine to release only the named savepoint.
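For example:

```sql
SAVEPOINT A;
SAVEPOINT B;
SAVEPOINT C;

RELEASE SAVEPOINT B;        -- releases B and also C (created after B)
-- RELEASE SAVEPOINT B ONLY -- would release B but leave C in place
```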
13.1.6. Internal Savepoints
By default, the engine uses an automatic transaction-level system savepoint to perform transaction rollback.
When a ROLLBACK
statement is issued, all changes performed in this transaction are backed out via a transaction-level savepoint, and the transaction is then committed.
This logic reduces the amount of garbage collection caused by rolled back transactions.
When the volume of changes performed under a transaction-level savepoint is getting large (~50000 records affected), the engine releases the transaction-level savepoint and uses the Transaction Inventory Page (TIP) as a mechanism to roll back the transaction if needed.
If you expect the volume of changes in your transaction to be large, you can specify the NO AUTO UNDO option when you start the transaction. |
13.1.7. Savepoints and PSQL
Transaction control statements are not allowed in PSQL, as that would break the atomicity of the statement that calls the procedure. However, Firebird does support the raising and handling of exceptions in PSQL, so that actions performed in stored procedures and triggers can be selectively undone without the entire procedure failing.
Internally, automatic savepoints are used to:
-
undo all actions in the
BEGIN…END
block where an exception occurs -
undo all actions performed by the procedure or trigger or, in a selectable procedure, all actions performed since the last
SUSPEND
, when execution terminates prematurely because of an uncaught error or exception
Each PSQL exception handling block is also bounded by automatic system savepoints.
A BEGIN…END
block does not itself create an automatic savepoint.
A savepoint is created only in blocks that contain a WHEN
statement for handling exceptions.
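A minimal sketch (the table name is hypothetical): the UPDATE below is wrapped in an automatic savepoint because the block has a WHEN handler, so if an exception occurs, the block's changes are undone before the handler runs:

```sql
EXECUTE BLOCK AS
BEGIN
  BEGIN
    UPDATE PRODUCTS SET PRICE = PRICE * 1.1;
    -- ... an exception raised here undoes the UPDATE above ...
  WHEN ANY DO
    -- the block's changes were rolled back to the automatic savepoint
    EXIT;
  END
END
```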
14. Security
Databases must be secure and so must the data stored in them. Firebird provides three levels of data security: user authentication at the server level, SQL privileges within databases, and — optionally — database encryption. This chapter describes how to manage security at these three levels.
There is also a fourth level of data security: wire protocol encryption, which encrypts data in transit between client and server. Wire protocol encryption is out of scope for this Language Reference. |
14.1. User Authentication
The security of the entire database depends on identifying a user and verifying its authority, a procedure known as authentication.
User authentication can be performed in several ways, depending on the setting of the AuthServer
parameter in the firebird.conf
configuration file.
This parameter contains the list of authentication plugins that can be used when connecting to the server.
If the first plugin fails to authenticate the user, the client can proceed with the next plugin, and so on.
If no plugin can authenticate the user, the user receives an error message.
The information about users authorised to access a specific Firebird server is stored in a special security database named security5.fdb
.
Each record in security5.fdb
is a user account for one user.
For each database, the security database can be overridden in the databases.conf
file (parameter SecurityDatabase
).
Any database can be a security database, even for that database itself.
A username, with a maximum length of 63 characters, is an identifier, following the normal rules for identifiers (unquoted case-insensitive, double-quoted case-sensitive).
For backwards compatibility, some statements (e.g. isql’s CONNECT
) accept usernames enclosed in single quotes, which will behave as normal, unquoted identifiers.
The maximum password length depends on the user manager plugin (parameter UserManager
, in firebird.conf
or databases.conf
).
Passwords are case-sensitive.
The default user manager is the first plugin in the UserManager
list, but this can be overridden in the SQL user management statements.
For the Srp
plugin, the maximum password length is 255 characters, for an effective length of 20 bytes (see also Why is the effective password length of SRP 20 bytes?).
For the Legacy_UserManager
plugin only the first eight bytes of a password are significant;
whilst it is valid to enter a password longer than eight bytes for Legacy_UserManager
, any subsequent characters are ignored.
The embedded version of the server does not use authentication; for embedded, the filesystem permissions to open the database file are used as authorization to access the database. However, the username, and — if necessary — the role, must be specified in the connection parameters, as they control access to database objects.
SYSDBA or the owner of the database has unrestricted access to all objects of the database.
Users with the RDB$ADMIN
role have similar unrestricted access if they specify that role when connecting or with SET ROLE
.
14.1.1. Specially Privileged Users
In Firebird, the SYSDBA account is a “superuser” that exists beyond any security restrictions.
It has complete access to all objects in all regular databases on the server, and full read/write access to the accounts in the security database security5.fdb
.
No user has remote access to the metadata of the security database.
For Srp
, the SYSDBA account does not exist by default;
it will need to be created using an embedded connection.
For Legacy_Auth
, the default SYSDBA password on Windows and macOS is “masterkey” — or “masterke”, to be exact, because of the 8-character length limit.
The default password “masterkey” is known across the universe. It should be changed as soon as the Firebird server installation is complete. |
Other users can acquire elevated privileges in several ways, some of which depend on the operating system platform. These are discussed in the sections that follow and are summarised in Administrators and Fine-grained System Privileges.
POSIX Hosts
On POSIX systems, including macOS, the POSIX username will be used as the Firebird Embedded username if a username is not explicitly specified.
The SYSDBA
User on POSIX
On POSIX hosts, other than macOS, the SYSDBA user does not have a default password.
If the full installation is done using the standard scripts, a one-off password will be created and stored in a text file in the same directory as security5.fdb
, commonly /opt/firebird/
.
The name of the password file is SYSDBA.password
.
In an installation performed by a distribution-specific installer, the location of the security database and the password file may be different from the standard one. |
Windows Hosts
On the Windows Server operating systems, operating system accounts can be used.
Windows authentication (also known as “trusted authentication”) can be enabled by including the Win_Sspi
plugin in the AuthServer
list in firebird.conf
.
The plugin must also be present in the AuthClient
setting at the client-side.
Windows operating system administrators are not automatically granted SYSDBA privileges when connecting to a database.
To make that happen, the internally-created role RDB$ADMIN
must be altered by SYSDBA or the database owner, to enable it.
For details, refer to the later section entitled AUTO ADMIN MAPPING
.
Prior to Firebird 3.0, with trusted authentication enabled, users who passed the default checks were automatically mapped to |
The Database Owner
The “owner” of a database is either the user who was CURRENT_USER
at the time of creation (or restore) of the database or, if the USER
parameter was supplied in the CREATE DATABASE
statement, the specified user.
“Owner” is not a username.
The user who is the owner of a database has full administrator privileges with respect to that database, including the right to drop it, to restore it from a backup and to enable or disable the AUTO ADMIN MAPPING
capability.
Users with the USER_MANAGEMENT
System Privilege
A user with the USER_MANAGEMENT
system privilege in the security database can create, alter and drop users.
To receive the USER_MANAGEMENT
privilege, the security database must have a role with that privilege:
create role MANAGE_USERS
set system privileges to USER_MANAGEMENT;
There are two options for the user to exercise these privileges:
-
Grant the role as a default role. The user will always be able to create, alter or drop users.
grant default MANAGE_USERS to user ALEX;
-
Grant the role as a normal role. The user will only be able to create, alter or drop users when the role is specified explicitly on login or using
SET ROLE
.grant MANAGE_USERS to user ALEX;
If the security database is a different database than the user connects to — which is usually the case when using
security5.fdb
— then a role with the same name must also exist and be granted to the user in that database for the user to be able to activate the role. The role in the other database does not need any system privileges or other privileges.
The USER_MANAGEMENT
system privilege does not allow a user to grant or revoke the admin role.
This requires the RDB$ADMIN
role.
14.1.2. RDB$ADMIN
Role
The internally-created role RDB$ADMIN
is present in all databases.
Assigning the RDB$ADMIN
role to a regular user in a database grants that user the privileges of the SYSDBA
, in that database only.
The elevated privileges take effect when the user is logged in to that regular database under the RDB$ADMIN
role, and gives full control over all objects in that database.
Being granted the RDB$ADMIN
role in the security database confers the authority to create, alter and drop user accounts.
In both cases, the user with the elevated privileges can assign RDB$ADMIN
role to any other user.
In other words, specifying WITH ADMIN OPTION
is unnecessary because it is built into the role.
Granting the RDB$ADMIN
Role in the Security Database
Since nobody — not even SYSDBA — can connect to the security database remotely, the GRANT
and REVOKE
statements are of no use for this task.
Instead, the RDB$ADMIN
role is granted and revoked using the SQL statements for user management:
CREATE USER new_user PASSWORD 'password' GRANT ADMIN ROLE;
ALTER USER existing_user GRANT ADMIN ROLE;
ALTER USER existing_user REVOKE ADMIN ROLE;
Parameter | Description
---|---
new_user | Name for the new user
existing_user | Name of an existing user
password | User password
The grantor must be logged in as an administrator.
Doing the Same Task Using gsec
With Firebird 3.0, gsec was deprecated. It is recommended to use the SQL user management statements instead. |
An alternative is to use gsec with the -admin
parameter to store the RDB$ADMIN
attribute on the user’s record:
gsec -add new_user -pw password -admin yes
gsec -mo existing_user -admin yes
gsec -mo existing_user -admin no
Depending on the administrative status of the current user, more parameters may be needed when invoking gsec, e.g. -user
and -pass
, -role
, or -trusted
.
Using the RDB$ADMIN
Role in the Security Database
To manage user accounts through SQL, the user must have the RDB$ADMIN
role in the security database.
No user can connect to the security database remotely, so the solution is that the user connects to a regular database.
From there, they can submit any SQL user management command.
Contrary to Firebird 3.0 or earlier, the user does not need to specify the RDB$ADMIN
role on connect, nor do they need to have the RDB$ADMIN
role in the database used to connect.
Granting the RDB$ADMIN
Role in a Regular Database
In a regular database, the RDB$ADMIN
role is granted and revoked with the usual syntax for granting and revoking roles:
GRANT [DEFAULT] RDB$ADMIN TO username
REVOKE [DEFAULT] RDB$ADMIN FROM username
Parameter | Description
---|---
username | Name of the user
To grant and revoke the RDB$ADMIN
role, the grantor must be logged in as an administrator.
AUTO ADMIN MAPPING
Windows Administrators are not automatically granted RDB$ADMIN
privileges when connecting to a database (when Win_Sspi
is enabled).
The AUTO ADMIN MAPPING
switch determines whether Administrators have automatic RDB$ADMIN
rights, on a database-by-database basis.
By default, AUTO ADMIN MAPPING is disabled when a database is created.
If AUTO ADMIN MAPPING
is enabled in the database, it will take effect whenever a Windows Administrator connects:
-
using
Win_Sspi
authentication, and -
without specifying any role
After a successful “auto admin” connection, the current role is set to RDB$ADMIN
.
If an explicit role was specified on connect, the RDB$ADMIN
role can be assumed later in the session using SET TRUSTED ROLE
.
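For example, assuming AUTO ADMIN MAPPING is enabled and the Administrator connected through Win_Sspi with an explicit role, the mapped role can be assumed mid-session:

```sql
-- Switches the current role to the role mapped for the trusted
-- (Win_Sspi-authenticated) user, i.e. RDB$ADMIN in this scenario.
SET TRUSTED ROLE;
```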
Auto Admin Mapping in Regular Databases
To enable and disable automatic mapping in a regular database:
ALTER ROLE RDB$ADMIN
SET AUTO ADMIN MAPPING; -- enable it
ALTER ROLE RDB$ADMIN
DROP AUTO ADMIN MAPPING; -- disable it
Either statement must be issued by a user with sufficient rights, that is:
-
The database owner
-
A user with the
ALTER ANY ROLE
privilege
The statement SET AUTO ADMIN MAPPING
is a simplified form of a CREATE MAPPING
statement that maps the predefined group DOMAIN_ANY_RID_ADMINS
to the role RDB$ADMIN.
Accordingly, the statement DROP AUTO ADMIN MAPPING
is equivalent to a DROP MAPPING
statement for that mapping.
For details, see Mapping of Users to Objects |
In a regular database, the status of AUTO ADMIN MAPPING
is checked only at connect time.
If an Administrator has the RDB$ADMIN
role because auto-mapping was on when they logged in, they will keep that role for the duration of the session, even if they or someone else turns off the mapping in the meantime.
Likewise, switching on AUTO ADMIN MAPPING
will not change the current role to RDB$ADMIN
for Administrators who were already connected.
Auto Admin Mapping in the Security Database
The ALTER ROLE RDB$ADMIN
statement cannot enable or disable AUTO ADMIN MAPPING
in the security database.
However, you can create a global mapping for the predefined group DOMAIN_ANY_RID_ADMINS
to the role RDB$ADMIN
in the following way:
CREATE GLOBAL MAPPING WIN_ADMINS
USING PLUGIN WIN_SSPI
FROM Predefined_Group DOMAIN_ANY_RID_ADMINS
TO ROLE RDB$ADMIN;
Additionally, you can use gsec:
gsec -mapping set
gsec -mapping drop
Depending on the administrative status of the current user, more parameters may be needed when invoking gsec, e.g. -user
and -pass
, -role
, or -trusted
.
Only SYSDBA can enable AUTO ADMIN MAPPING
if it is disabled, but any administrator can turn it off.
When an administrator turns off AUTO ADMIN MAPPING
in gsec, they disable the very mechanism that gave them access, and thus will not be able to re-enable AUTO ADMIN MAPPING
themselves.
Even in an interactive gsec session, the new flag setting takes effect immediately.
14.1.3. Administrators
An administrator is a user that has sufficient rights to read, write to, create, alter or delete any object in a database to which that user’s administrator status applies. The table summarises how “superuser” privileges are enabled in the various Firebird security contexts.
User | RDB$ADMIN Role | Comments
---|---|---
SYSDBA | Auto | Exists automatically at server level. Has full privileges to all objects in all databases. Can create, alter and drop users, but has no direct remote access to the security database
root user on POSIX | Auto | Exactly like SYSDBA
Superuser on POSIX | Auto | Exactly like SYSDBA
Windows Administrator | Set automatically when AUTO ADMIN MAPPING is enabled | Exactly like SYSDBA, provided AUTO ADMIN MAPPING is enabled and no explicit role was specified at login
Database owner | Auto | Like SYSDBA, but only in the databases they own
Regular user | Must be previously granted; must be supplied at login or have been granted as a default role | Like SYSDBA, but only in that database
POSIX OS user | Must be previously granted; must be supplied at login or have been granted as a default role | Like SYSDBA, but only in that database
Windows user | Must be previously granted; must be supplied at login | Like SYSDBA, but only in that database
14.1.4. Fine-grained System Privileges
In addition to granting users full administrative privileges, system privileges make it possible to grant regular users a subset of administrative privileges that have historically been limited to SYSDBA and administrators only. For example:
-
Run utilities such as gbak, gfix, nbackup and so on
-
Shut down a database and bring it online
-
Trace other users' attachments
-
Access the monitoring tables
The implementation defines a set of system privileges, analogous to object privileges, from which lists of privileged tasks can be assigned to roles.
It is also possible to grant normal privileges to a system privilege, making the system privilege act like a special role type.
The system privileges are assigned through CREATE ROLE
and ALTER ROLE
.
Be aware that each system privilege provides a very thin level of control.
For some tasks it may be necessary to give the user more than one system privilege.
For example, add IGNORE_DB_TRIGGERS
to USE_GFIX_UTILITY
for a user who needs gfix to shut down a database without firing database-level triggers. |
List of Valid System Privileges
The following table lists the names of the valid system privileges that can be granted to and revoked from roles.
Privilege | Description
---|---
USER_MANAGEMENT | Manage users (given in the security database)
READ_RAW_PAGES | Read pages in raw format using Attachment::getInfo()
CREATE_USER_TYPES | Add/change/delete non-system records in RDB$TYPES
USE_NBACKUP_UTILITY | Use nbackup to create database copies
CHANGE_SHUTDOWN_MODE | Shut down database and bring online
TRACE_ANY_ATTACHMENT | Trace other users' attachments
MONITOR_ANY_ATTACHMENT | Monitor (MON$ tables) other users' attachments
ACCESS_SHUTDOWN_DATABASE | Access database when it is shut down
CREATE_DATABASE | Create new databases (given in the security database)
DROP_DATABASE | Drop this database
USE_GBAK_UTILITY | Use gbak utility
USE_GSTAT_UTILITY | Use gstat utility
USE_GFIX_UTILITY | Use gfix utility
IGNORE_DB_TRIGGERS | Instruct engine not to run DB-level triggers
CHANGE_HEADER_SETTINGS | Modify parameters in DB header page
SELECT_ANY_OBJECT_IN_DATABASE | Use SELECT for any selectable object
ACCESS_ANY_OBJECT_IN_DATABASE | Access (in any possible way) any object
MODIFY_ANY_OBJECT_IN_DATABASE | Modify (up to drop) any object
CHANGE_MAPPING_RULES | Change authentication mappings
USE_GRANTED_BY_CLAUSE | Use GRANTED BY in GRANT and REVOKE statements
GRANT_REVOKE_ON_ANY_OBJECT | GRANT and REVOKE rights on any object in the database
GRANT_REVOKE_ANY_DDL_RIGHT | GRANT and REVOKE any DDL rights
CREATE_PRIVILEGED_ROLES | Use SET SYSTEM PRIVILEGES in roles
GET_DBCRYPT_INFO | Get database encryption information
MODIFY_EXT_CONN_POOL | Use command ALTER EXTERNAL CONNECTIONS POOL
REPLICATE_INTO_DATABASE | Use replication API to load change sets into database
PROFILE_ANY_ATTACHMENT | Profile attachments of other users
14.2. SQL Statements for User Management
This section describes the SQL statements for creating, altering and dropping Firebird user accounts. These statements can be executed by the following users:
-
SYSDBA
-
Any user with the
RDB$ADMIN
role in the security database -
When the
AUTO ADMIN MAPPING
flag is enabled in the security database (security5.fdb
or the security database configured for the current database in thedatabases.conf
), any Windows Administrator — assumingWin_Sspi
was used to connect without specifying roles. -
Any user with the system privilege
USER_MANAGEMENT
in the security databaseFor a Windows Administrator,
AUTO ADMIN MAPPING
enabled only in a regular database is not sufficient to permit management of other users. For instructions to enable it in the security database, see Auto Admin Mapping in the Security Database.
Non-privileged users can use only the ALTER USER
statement, and then only to modify some data of their own account.
14.2.1. CREATE USER
Creates a Firebird user account
DSQL
CREATE USER username
  <user_option> [<user_option> ...]
  [TAGS (<user_var> [, <user_var> ...])]

<user_option> ::=
    PASSWORD 'password'
  | FIRSTNAME 'firstname'
  | MIDDLENAME 'middlename'
  | LASTNAME 'lastname'
  | {GRANT | REVOKE} ADMIN ROLE
  | {ACTIVE | INACTIVE}
  | USING PLUGIN plugin_name

<user_var> ::=
    tag_name = 'tag_value'
  | DROP tag_name
Parameter | Description
---|---
username | Username. The maximum length is 63 characters, following the rules for Firebird identifiers
password | User password. Valid or effective password length depends on the user manager plugin. Case-sensitive
firstname | Optional: user’s first name. Maximum length 32 characters
middlename | Optional: user’s middle name. Maximum length 32 characters
lastname | Optional: user’s last name. Maximum length 32 characters
plugin_name | Name of the user manager plugin
tag_name | Name of a custom attribute. The maximum length is 63 characters, following the rules for Firebird regular identifiers
tag_value | Value of the custom attribute. The maximum length is 255 characters
If the user already exists in the Firebird security database for the specified user manager plugin, an error is raised. It is possible to create multiple users with the same name: one per user manager plugin.
The username argument must follow the rules for Firebird regular identifiers: see Identifiers in the Structure chapter. Usernames are case-sensitive when double-quoted (in other words, they follow the same rules as other delimited identifiers).
Usernames follow the general rules and syntax of identifiers.
Thus, a user named ALEX (or alex, or "ALEX") is one and the same user, whereas "Alex" is a different user. |
The PASSWORD
clause specifies the user’s password, and is required.
The valid or effective password length depends on the user manager plugin, see also User Authentication.
The optional FIRSTNAME
, MIDDLENAME
and LASTNAME
clauses can be used to specify additional user properties, such as the person’s first name, middle name and last name, respectively.
These are VARCHAR(32)
fields and can be used to store anything you prefer.
If the GRANT ADMIN ROLE
clause is specified, the new user account is created with the privileges of the RDB$ADMIN
role in the security database (security5.fdb
or database-specific).
It allows the new user to manage user accounts from any regular database they log into, but it does not grant the user any special privileges on objects in those databases.
The REVOKE ADMIN ROLE
clause is syntactically valid in a CREATE USER
statement, but has no effect.
It is not possible to specify GRANT ADMIN ROLE
and REVOKE ADMIN ROLE
in one statement.
The ACTIVE
clause specifies that the user is active and can log in; this is the default.
The INACTIVE
clause specifies the user is inactive and cannot log in.
It is not possible to specify ACTIVE
and INACTIVE
in one statement.
The ACTIVE
/INACTIVE
option is not supported by the Legacy_UserManager
and will be ignored.
The USING PLUGIN
clause explicitly specifies the user manager plugin to use for creating the user.
Only plugins listed in the UserManager
configuration for this database (firebird.conf
, or overridden in databases.conf
) are valid.
The default user manager (first in the UserManager
configuration) is applied when this clause is not specified.
Users of the same name created using different user manager plugins are different objects. Therefore, the user created with one user manager plugin can only be altered or dropped by that same plugin. From the perspective of ownership, and privileges and roles granted in a database, different user objects with the same name are considered one and the same user. |
The TAGS
clause can be used to specify additional user attributes.
Custom attributes are not supported (silently ignored) by the Legacy_UserManager
.
Custom attribute names follow the rules of Firebird identifiers, but are handled case-insensitively (for example, specifying both "A BC"
and "a bc"
will raise an error).
The value of a custom attribute can be a string of maximum 255 characters.
The DROP tag_name
option is syntactically valid in CREATE USER
, but behaves as if the property is not specified.
Users can view and alter their own custom attributes. Do not use this for sensitive or security related information. |
Who Can Create a User
To create a user account, the current user must have either
-
administrator privileges in the security database, or
-
the
USER_MANAGEMENT
system privilege in the security database. Users with the USER_MANAGEMENT
system privilege cannot grant or revoke the admin role.
CREATE USER
Examples
-
Creating a user with the username
bigshot
:CREATE USER bigshot PASSWORD 'buckshot';
-
Creating a user with the
Legacy_UserManager
user manager pluginCREATE USER godzilla PASSWORD 'robot' USING PLUGIN Legacy_UserManager;
-
Creating the user
john
with custom attributes:CREATE USER john PASSWORD 'fYe_3Ksw' FIRSTNAME 'John' LASTNAME 'Doe' TAGS (BIRTHYEAR='1970', CITY='New York');
-
Creating an inactive user:
CREATE USER john PASSWORD 'fYe_3Ksw' INACTIVE;
-
Creating the user
superuser
with user management privileges:CREATE USER superuser PASSWORD 'kMn8Kjh' GRANT ADMIN ROLE;
14.2.2. ALTER USER
Alters a Firebird user account
DSQL
ALTER {USER username | CURRENT USER}
  [SET]
  [<user_option> [<user_option> ...]]
  [TAGS (<user_var> [, <user_var> ...])]

<user_option> ::=
    PASSWORD 'password'
  | FIRSTNAME 'firstname'
  | MIDDLENAME 'middlename'
  | LASTNAME 'lastname'
  | {GRANT | REVOKE} ADMIN ROLE
  | {ACTIVE | INACTIVE}
  | USING PLUGIN plugin_name

<user_var> ::=
    tag_name = 'tag_value'
  | DROP tag_name
See CREATE USER
for details on the statement parameters.
Any user can alter their own account, except that only an administrator may use GRANT/REVOKE ADMIN ROLE
and ACTIVE/INACTIVE
.
All clauses are optional, but at least one other than USING PLUGIN
must be present:
-
The
PASSWORD
parameter is for changing the password for the user -
FIRSTNAME
,MIDDLENAME
andLASTNAME
update these optional user properties, such as the person’s first name, middle name and last name respectively -
GRANT ADMIN ROLE
grants the user the privileges of theRDB$ADMIN
role in the security database (security5.fdb
), enabling them to manage the accounts of other users. It does not grant the user any special privileges in regular databases. -
REVOKE ADMIN ROLE
removes the user’s administrator privileges in the security database which, once the transaction is committed, will deny that user the ability to alter any user account except their own
ACTIVE
will enable a disabled account (not supported forLegacy_UserManager
) -
INACTIVE
will disable an account (not supported forLegacy_UserManager
). This is convenient to temporarily disable an account without deleting it. -
USING PLUGIN
specifies the user manager plugin to use -
TAGS
can be used to add, update or remove (DROP
) additional custom attributes (not supported forLegacy_UserManager
). Attributes not listed will not be changed.
See CREATE USER
for more details on the clauses.
If you need to change your own account, then instead of specifying the name of the current user, you can use the CURRENT USER
clause.
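For instance, a user changing their own password — a minimal sketch with an illustrative password:

```sql
-- Any authenticated user may run this against their own account.
ALTER CURRENT USER SET PASSWORD 'nEw_p4ssw0rd';
```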
Remember to commit your work if you are working in an application that does not auto-commit DDL. |
Who Can Alter a User?
To modify the account of another user, the current user must have either
-
administrator privileges in the security database, or
-
the
USER_MANAGEMENT
system privilege in the security database. Users with the USER_MANAGEMENT
system privilege cannot grant or revoke the admin role.
Anyone can modify their own account, except for the GRANT/REVOKE ADMIN ROLE
and ACTIVE/INACTIVE
options, which require administrative privileges to change.
ALTER USER
Examples
-
Changing the password for the user
bobby
and granting them user management privileges:ALTER USER bobby PASSWORD '67-UiT_G8' GRANT ADMIN ROLE;
-
Editing the optional properties (the first and last names) of the user
dan
:ALTER USER dan FIRSTNAME 'No_Jack' LASTNAME 'Kennedy';
-
Revoking user management privileges from user
dumbbell
:ALTER USER dumbbell REVOKE ADMIN ROLE;
14.2.3. CREATE OR ALTER USER
Creates a Firebird user account if it doesn’t exist, or alters a Firebird user account
DSQL
CREATE OR ALTER USER username
  [SET]
  [<user_option> [<user_option> ...]]
  [TAGS (<user_var> [, <user_var> ...])]

<user_option> ::=
    PASSWORD 'password'
  | FIRSTNAME 'firstname'
  | MIDDLENAME 'middlename'
  | LASTNAME 'lastname'
  | {GRANT | REVOKE} ADMIN ROLE
  | {ACTIVE | INACTIVE}
  | USING PLUGIN plugin_name

<user_var> ::=
    tag_name = 'tag_value'
  | DROP tag_name
See CREATE USER
and ALTER USER
for details on the statement parameters.
If the user does not exist, it will be created as if executing a CREATE USER
statement.
If the user already exists, it will be modified as if executing an ALTER USER
statement.
The CREATE OR ALTER USER
statement must contain at least one of the optional clauses other than USING PLUGIN
.
If the user does not exist yet, the PASSWORD
clause is required.
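A sketch, reusing the user john from the earlier examples:

```sql
-- Creates john if absent (PASSWORD present, as required for creation),
-- otherwise updates only the listed properties.
CREATE OR ALTER USER john SET PASSWORD 'fYe_3Ksw' FIRSTNAME 'John';
```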
Remember to commit your work if you are working in an application that does not auto-commit DDL. |
14.2.4. DROP USER
Drops a Firebird user account
DSQL
DROP USER username [USING PLUGIN plugin_name]
Parameter | Description
---|---
username | Username
plugin_name | Name of the user manager plugin
The optional USING PLUGIN
clause explicitly specifies the user manager plugin to use for dropping the user.
Only plugins listed in the UserManager
configuration for this database (firebird.conf
, or overridden in databases.conf
) are valid.
The default user manager (first in the UserManager
configuration) is applied when this clause is not specified.
Users of the same name created using different user manager plugins are different objects. Therefore, the user created with one user manager plugin can only be dropped by that same plugin. |
Remember to commit your work if you are working in an application that does not auto-commit DDL. |
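Two sketches, reusing user names from the CREATE USER examples:

```sql
-- Drop a user managed by the default user manager plugin:
DROP USER bobby;

-- Drop a user that was created with the Legacy_UserManager plugin:
DROP USER godzilla USING PLUGIN Legacy_UserManager;
```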
Who Can Drop a User?
To drop a user, the current user must have either
-
administrator privileges in the security database, or
-
the
USER_MANAGEMENT
system privilege in the security database
14.3. SQL Privileges
The second level of Firebird’s security model is SQL privileges. Whilst a successful login — the first level — authorises a user’s access to the server and to all databases under that server, it does not imply that the user has access to any objects in any databases. When an object is created, only the user that created it (its owner) and administrators have access to it. The user needs privileges on each object they need to access. As a general rule, privileges must be granted explicitly to a user by the object owner or an administrator of the database.
A privilege comprises a DML access type (SELECT
, INSERT
, UPDATE
, DELETE
, EXECUTE
and REFERENCES
), the name of a database object (table, view, procedure, role) and the name of the grantee (user, procedure, trigger, role).
Various means are available to grant multiple types of access on an object to multiple users in a single GRANT
statement.
Privileges may be revoked from a user with REVOKE
statements.
An additional type of privileges, DDL privileges, provide rights to create, alter or drop specific types of metadata objects. System privileges provide a subset of administrator permissions to a role (and indirectly, to a user).
Privileges are stored in the database to which they apply and are not applicable to any other database, except the DATABASE
DDL privileges, which are stored in the security database.
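As a first taste of the statements detailed in the following sections — table and user names here are hypothetical:

```sql
-- Allow ALEX to read and insert into SALES, then withdraw the insert right:
GRANT SELECT, INSERT ON TABLE SALES TO USER ALEX;
REVOKE INSERT ON TABLE SALES FROM USER ALEX;
```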
14.4. ROLE
A role is a database object that packages a set of privileges. Roles implement the concept of access control at a group level. Multiple privileges are granted to the role and then that role can be granted to or revoked from one or many users, or even other roles.
A role that has been granted as a “default” role will be activated automatically.
Otherwise, a user must supply that role in their login credentials — or with SET ROLE
— to exercise the associated privileges.
Any other privileges granted to the user directly are not affected by their login with the role.
Logging in with multiple explicit roles simultaneously is not supported, but a user can have multiple default roles active at the same time.
In this section the tasks of creating and dropping roles are discussed.
14.4.1. CREATE ROLE
Creates a role
DSQL, ESQL
CREATE ROLE rolename
  [SET SYSTEM PRIVILEGES TO <sys_privileges>]

<sys_privileges> ::= <sys_privilege> [, <sys_privilege> ...]

<sys_privilege> ::=
    USER_MANAGEMENT | READ_RAW_PAGES
  | CREATE_USER_TYPES | USE_NBACKUP_UTILITY
  | CHANGE_SHUTDOWN_MODE | TRACE_ANY_ATTACHMENT
  | MONITOR_ANY_ATTACHMENT | ACCESS_SHUTDOWN_DATABASE
  | CREATE_DATABASE | DROP_DATABASE
  | USE_GBAK_UTILITY | USE_GSTAT_UTILITY
  | USE_GFIX_UTILITY | IGNORE_DB_TRIGGERS
  | CHANGE_HEADER_SETTINGS
  | SELECT_ANY_OBJECT_IN_DATABASE
  | ACCESS_ANY_OBJECT_IN_DATABASE
  | MODIFY_ANY_OBJECT_IN_DATABASE
  | CHANGE_MAPPING_RULES | USE_GRANTED_BY_CLAUSE
  | GRANT_REVOKE_ON_ANY_OBJECT
  | GRANT_REVOKE_ANY_DDL_RIGHT
  | CREATE_PRIVILEGED_ROLES | GET_DBCRYPT_INFO
  | MODIFY_EXT_CONN_POOL | REPLICATE_INTO_DATABASE
  | PROFILE_ANY_ATTACHMENT
Parameter | Description
---|---
rolename | Role name. The maximum length is 63 characters
sys_privilege | System privilege to grant
The statement CREATE ROLE
creates a new role object, to which one or more privileges can be granted subsequently.
The name of a role must be unique among the names of roles in the current database.
It is advisable to make the name of a role unique among usernames as well. The system will not prevent the creation of a role whose name clashes with an existing username, but if it happens, the user will be unable to connect to the database. |
Who Can Create a Role
The CREATE ROLE
statement can be executed by:
-
Users with the
CREATE ROLE
privilege
-
Setting system privileges also requires the system privilege CREATE_PRIVILEGED_ROLES
The user executing the CREATE ROLE
statement becomes the owner of the role.
CREATE ROLE
Examples
-
Creating a role SELLERS:
CREATE ROLE SELLERS;
-
Creating a role SELECT_ALL with the system privilege to select from any selectable object:
CREATE ROLE SELECT_ALL
SET SYSTEM PRIVILEGES TO SELECT_ANY_OBJECT_IN_DATABASE;
See also: ALTER ROLE, DROP ROLE, GRANT, REVOKE, Fine-grained System Privileges
14.4.2. ALTER ROLE
Alters a role
DSQL
ALTER ROLE rolename
{ SET SYSTEM PRIVILEGES TO <sys_privileges>
| DROP SYSTEM PRIVILEGES
| {SET | DROP} AUTO ADMIN MAPPING }
<sys_privileges> ::=
!! See CREATE ROLE
!!
Parameter | Description
---|---
rolename | Role name; with {SET | DROP} AUTO ADMIN MAPPING, specifying anything other than RDB$ADMIN will fail
sys_privilege | System privilege to grant
ALTER ROLE
can be used to grant or revoke system privileges from a role, or enable and disable the capability for Windows Administrators to assume administrator privileges automatically when logging in.
This last capability can affect only one role: the system-generated role RDB$ADMIN
.
For details on auto admin mapping, see AUTO ADMIN MAPPING
.
It is not possible to selectively grant or revoke system privileges.
Only the privileges listed in the SET SYSTEM PRIVILEGES
clause will be available to the role after commit, and DROP SYSTEM PRIVILEGES
will remove all system privileges from this role.
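For example, reusing the MANAGE_USERS role from the start of this chapter (the privilege list replaces any previously set list):

```sql
-- Replace the role's system privileges with exactly this list:
ALTER ROLE MANAGE_USERS
  SET SYSTEM PRIVILEGES TO USER_MANAGEMENT, TRACE_ANY_ATTACHMENT;

-- Remove all system privileges from the role:
ALTER ROLE MANAGE_USERS DROP SYSTEM PRIVILEGES;
```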
Who Can Alter a Role
The ALTER ROLE
statement can be executed by:
-
Users with the
ALTER ANY ROLE
privilege, with the following caveats-
Setting or dropping system privileges also requires the system privilege
CREATE_PRIVILEGED_ROLES
-
Setting or dropping auto admin mapping also requires the system privilege
CHANGE_MAPPING_RULES
-
14.4.3. DROP ROLE
Drops a role
DSQL, ESQL
DROP ROLE rolename
The statement DROP ROLE
deletes an existing role.
It takes a single argument, the name of the role.
Once the role is deleted, the entire set of privileges is revoked from all users and objects that were granted the role.
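For example, dropping the SELLERS role created earlier:

```sql
-- All privileges acquired through SELLERS are revoked on drop.
DROP ROLE SELLERS;
```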
14.5. Statements for Granting Privileges
A GRANT
statement is used for granting privileges — including roles — to users and other database objects.
14.5.1. GRANT
Grants privileges and assigns roles
DSQL, ESQL
GRANT <privileges>
TO <grantee_list>
[WITH GRANT OPTION]
[{GRANTED BY | AS} [USER] grantor]
<privileges> ::=
<table_privileges> | <execute_privilege>
| <usage_privilege> | <ddl_privileges>
| <db_ddl_privileges>
<table_privileges> ::=
{ALL [PRIVILEGES] | <table_privilege_list> }
ON [TABLE] {table_name | view_name}
<table_privilege_list> ::=
<table_privilege> [, <table_privilege> ...]
<table_privilege> ::=
SELECT | DELETE | INSERT
| UPDATE [(col [, col ...])]
| REFERENCES [(col [, col ...])]
<execute_privilege> ::= EXECUTE ON
{ PROCEDURE proc_name | FUNCTION func_name
| PACKAGE package_name }
<usage_privilege> ::= USAGE ON
{ EXCEPTION exception_name
| {GENERATOR | SEQUENCE} sequence_name }
<ddl_privileges> ::=
{ALL [PRIVILEGES] | <ddl_privilege_list>} <object_type>
<ddl_privilege_list> ::=
<ddl_privilege> [, <ddl_privilege> ...]
<ddl_privilege> ::= CREATE | ALTER ANY | DROP ANY
<object_type> ::=
CHARACTER SET | COLLATION | DOMAIN | EXCEPTION
| FILTER | FUNCTION | GENERATOR | PACKAGE
| PROCEDURE | ROLE | SEQUENCE | TABLE | VIEW
<db_ddl_privileges> ::=
{ALL [PRIVILEGES] | <db_ddl_privilege_list>} {DATABASE | SCHEMA}
<db_ddl_privilege_list> ::=
<db_ddl_privilege> [, <db_ddl_privilege> ...]
<db_ddl_privilege> ::= CREATE | ALTER | DROP
<grantee_list> ::= <grantee> [, <grantee> ...]
<grantee> ::=
PROCEDURE proc_name | FUNCTION func_name
| PACKAGE package_name | TRIGGER trig_name
| VIEW view_name | ROLE role_name
| [USER] username | GROUP Unix_group
| SYSTEM PRIVILEGE <sys_privilege>
<sys_privilege> ::=
!! See CREATE ROLE
!!
GRANT <role_granted_list> TO <role_grantee_list>
  [WITH ADMIN OPTION]
  [{GRANTED BY | AS} [USER] grantor]

<role_granted_list> ::= <role_granted> [, <role_granted> ...]

<role_granted> ::= [DEFAULT] role_name

<role_grantee_list> ::= <role_grantee> [, <role_grantee> ...]

<role_grantee> ::=
    user_or_role_name
  | USER username
  | ROLE role_name
Parameter | Description
---|---
grantor | The user granting the privilege(s)
table_name | The name of a table
view_name | The name of a view
col | The name of a table column
proc_name | The name of a stored procedure
func_name | The name of a stored function (or UDF)
package_name | The name of a package
exception_name | The name of an exception
sequence_name | The name of a sequence (generator)
object_type | The type of metadata object
trig_name | The name of a trigger
role_name | Role name
username | The username to which the privileges are granted or to which the role is assigned
Unix_group | The name of a user group in a POSIX operating system
sys_privilege | A system privilege
user_or_role_name | Name of a user or role
The GRANT
statement grants one or more privileges on database objects to users, roles, or other database objects.
A regular, authenticated user has no privileges on any database object until they are explicitly granted to that individual user, to a role granted to the user as a default role, or to all users bundled as the user PUBLIC
.
When an object is created, only its creator (the owner) and administrators have privileges to it, and can grant privileges to other users, roles, or objects.
Different sets of privileges apply to different types of metadata objects. The different types of privileges will be described separately later in this section.
|
The TO
Clause
The TO
clause specifies the users, roles, and other database objects that are to be granted the privileges enumerated in privileges.
The clause is mandatory.
The optional USER
keyword in the TO
clause allows you to specify exactly who or what is granted the privilege.
If a USER
(or ROLE
) keyword is not specified, the server first checks for a role with this name and, if there is no such role, the privileges are granted to the user with that name without further checking.
It is recommended to always explicitly specify USER or ROLE to avoid ambiguity. |
Packaging Privileges in a ROLE
Object
A role is a “container” object that can be used to package a collection of privileges. Use of the role is then granted to each user or role that requires those privileges. A role can also be granted to a list of users or roles.
The role must exist before privileges can be granted to it.
See CREATE ROLE
for the syntax and rules.
The role is maintained by granting privileges to it and, when required, revoking privileges from it.
When a role is dropped — see DROP ROLE
— all users lose the privileges acquired through the role.
Any privileges that were granted additionally to an affected user by way of a different grant statement are retained.
Unless the role is granted as a default role, a user that is granted a role must explicitly specify that role, either with their login credentials or activating it using SET ROLE
, to exercise the associated privileges.
Any other privileges granted to the user or received through default roles are not affected by explicitly specifying a role.
More than one role can be granted to the same user. Although only one role can be explicitly specified, multiple roles can be active for a user, either as default roles, or as roles granted to the current role.
A role can be granted to a user or to another role.
Cumulative Roles
The ability to grant roles to other roles and default roles results in so-called cumulative roles. Multiple roles can be active for a user, and the user receives the cumulative privileges of all those roles.
When a role is explicitly specified on connect or using SET ROLE
, the user will assume all privileges granted to that role, including privileges granted to its secondary roles (and to roles granted to those secondary roles, and so on).
In other words, when the primary role is explicitly specified, the secondary roles are activated as well.
The function RDB$ROLE_IN_USE
can be used to check if a role is currently active.
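For example, a sketch that checks whether the SELLERS role (from the CREATE ROLE examples) is active in the current session:

```sql
-- Returns TRUE if SELLERS is active, whether directly specified,
-- granted as a default role, or reached through a cumulative role.
SELECT RDB$ROLE_IN_USE('SELLERS') FROM RDB$DATABASE;
```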
See also Default Roles for the effects of DEFAULT
with cumulative roles, and The WITH ADMIN OPTION
Clause for effects on granting.
Default Roles
A role can be granted as a default role by prefixing the role with DEFAULT
in the GRANT
statement.
Granting roles as a default role to users simplifies management of privileges, as this makes it possible to group privileges on a role and grant that group of privileges to a user without requiring the user to explicitly specify the role.
Users can receive multiple default roles, granting them all privileges of those default roles.
The effects of a default role depend on whether the role is granted to a user or to another role:
-
When a role is granted to a user as a default role, the role will be activated automatically, and its privileges will be applied to the user without the need to explicitly specify the role.
Roles that are active by default are not returned from
CURRENT_ROLE
, but the functionRDB$ROLE_IN_USE
can be used to check if a role is currently active. -
When a role is granted to another role as a default role, the rights of that role will only be automatically applied to the user if the primary role is granted as a default role to the user; otherwise, the primary role needs to be specified explicitly (in other words, it behaves the same as when the secondary role was granted without the DEFAULT clause).

For a linked list of granted roles, all roles need to be granted as default roles for them to be applied automatically. That is, given GRANT DEFAULT ROLEA TO ROLE ROLEB, GRANT ROLEB TO ROLE ROLEC and GRANT DEFAULT ROLEC TO USER USER1, only ROLEC is active by default for USER1. To assume the privileges of ROLEA and ROLEB, ROLEC needs to be explicitly specified, or ROLEB needs to be granted DEFAULT to ROLEC.
The User PUBLIC
Firebird has a predefined user named PUBLIC
, which represents all users.
Privileges for operations on a particular object that are granted to the user PUBLIC
can be exercised by any authenticated user.
If privileges are granted to the user PUBLIC, they should be revoked from the user PUBLIC as well.
The WITH GRANT OPTION
Clause
The optional WITH GRANT OPTION
clause allows the users specified in the user list to grant the privileges specified in the privilege list to other users.
It is possible to assign this option to the user PUBLIC; this is not recommended, as it would allow every user to pass the privileges on to others.
The GRANTED BY
Clause
By default, when privileges are granted in a database, the current user is recorded as the grantor.
The GRANTED BY
clause enables the current user to grant those privileges as though they were granted by another user, who is then recorded as the grantor.
When using the REVOKE
statement, it will fail if the current user is not the user that was named in the GRANTED BY
clause.
The GRANTED BY
(and AS
) clause can be used only by the database owner and other administrators.
The object owner cannot use GRANTED BY
unless they also have administrator privileges.
Privileges on Tables and Views
For tables and views, unlike other metadata objects, it is possible to grant several privileges at once.
SELECT
-
Permits the user or object to
SELECT
data from the table or view INSERT
-
Permits the user or object to
INSERT
rows into the table or view DELETE
-
Permits the user or object to
DELETE
rows from the table or view UPDATE
-
Permits the user or object to
UPDATE
rows in the table or view, optionally restricted to specific columns REFERENCES
-
Permits the user or object to reference the table via a foreign key, optionally restricted to the specified columns. If the primary or unique key referenced by the foreign key of the other table is composite then all columns of the key must be specified.
ALL [PRIVILEGES]
-
Combines
SELECT
,INSERT
,UPDATE
,DELETE
andREFERENCES
privileges in a single package
Examples of GRANT <privilege>
on Tables
-
SELECT
andINSERT
privileges to the userALEX
:GRANT SELECT, INSERT ON TABLE SALES TO USER ALEX;
-
The
SELECT
privilege to theMANAGER
,ENGINEER
roles and to the userIVAN
:GRANT SELECT ON TABLE CUSTOMER TO ROLE MANAGER, ROLE ENGINEER, USER IVAN;
-
All privileges to the
ADMINISTRATOR
role, together with the authority to grant the same privileges to others:GRANT ALL ON TABLE CUSTOMER TO ROLE ADMINISTRATOR WITH GRANT OPTION;
-
The
SELECT
andREFERENCES
privileges on theNAME
column to all users and objects:GRANT SELECT, REFERENCES (NAME) ON TABLE COUNTRY TO PUBLIC;
-
The
SELECT
privilege being granted to the userIVAN
by the userALEX
:GRANT SELECT ON TABLE EMPLOYEE TO USER IVAN GRANTED BY ALEX;
-
Granting the
UPDATE
privilege on theFIRST_NAME
,LAST_NAME
columns:GRANT UPDATE (FIRST_NAME, LAST_NAME) ON TABLE EMPLOYEE TO USER IVAN;
-
Granting the
INSERT
privilege to the stored procedureADD_EMP_PROJ
:GRANT INSERT ON EMPLOYEE_PROJECT TO PROCEDURE ADD_EMP_PROJ;
The EXECUTE
Privilege
The EXECUTE
privilege applies to stored procedures, stored functions (including UDFs), and packages.
It allows the grantee to execute the specified object, and, if applicable, to retrieve its output.
In the case of selectable stored procedures, it acts somewhat like a SELECT
privilege, insofar as this style of stored procedure is executed in response to a SELECT
statement.
For packages, the EXECUTE
privilege can only be granted for the package as a whole, not for individual subroutines.
Examples of Granting the EXECUTE
Privilege
-
Granting the
EXECUTE
privilege on a stored procedure to a role:GRANT EXECUTE ON PROCEDURE ADD_EMP_PROJ TO ROLE MANAGER;
-
Granting the
EXECUTE
privilege on a stored function to a role:GRANT EXECUTE ON FUNCTION GET_BEGIN_DATE TO ROLE MANAGER;
-
Granting the
EXECUTE
privilege on a package to userPUBLIC
:GRANT EXECUTE ON PACKAGE APP_VAR TO USER PUBLIC;
-
Granting the
EXECUTE
privilege on a function to a package:GRANT EXECUTE ON FUNCTION GET_BEGIN_DATE TO PACKAGE APP_VAR;
The USAGE
Privilege
To be able to use metadata objects other than tables, views, stored procedures or functions, triggers and packages, it is necessary to grant the user (or database object like trigger, procedure or function) the USAGE
privilege on these objects.
By default, Firebird executes PSQL modules with the privileges of the caller, so it is necessary that either the user or otherwise the routine itself has been granted the USAGE
privilege.
This can be changed with the SQL SECURITY
clause of the DDL statements of those objects.
Currently, the USAGE privilege can only be granted on exceptions and sequences (generators); USAGE privileges for other object types, such as domains, character sets and collations, are not enforced.
For sequences (generators), the USAGE privilege only grants the right to increment the sequence with GEN_ID() or NEXT VALUE FOR; setting its value with SET GENERATOR or ALTER SEQUENCE ... RESTART WITH is a DDL operation that requires DDL privileges.
Examples of Granting the USAGE
Privilege
-
Granting the
USAGE
privilege on a sequence to a role:GRANT USAGE ON SEQUENCE GEN_AGE TO ROLE MANAGER;
-
Granting the
USAGE
privilege on a sequence to a trigger:GRANT USAGE ON SEQUENCE GEN_AGE TO TRIGGER TR_AGE_BI;
-
Granting the
USAGE
privilege on an exception to a package:GRANT USAGE ON EXCEPTION TO PACKAGE PKG_BILL;
DDL Privileges
By default, only administrators can create new metadata objects. Altering or dropping these objects is restricted to the owner of the object (its creator) and administrators. DDL privileges can be used to grant privileges for these operations to other users.
CREATE
-
Allows creation of an object of the specified type
ALTER ANY
-
Allows modification of any object of the specified type
DROP ANY
-
Allows deletion of any object of the specified type
ALL [PRIVILEGES]
-
Combines the
CREATE
,ALTER ANY
andDROP ANY
privileges for the specified type
There are no separate DDL privileges for triggers and indexes.
The necessary privileges are inherited from the table or view.
Creating, altering or dropping a trigger or index requires the ALTER ANY TABLE
or ALTER ANY VIEW
privilege.
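For example, DDL privileges could be granted as follows (the user JOE is a hypothetical name for this sketch):

```sql
-- Allow JOE to create his own tables
GRANT CREATE TABLE TO USER JOE;

-- Allow JOE to modify any view in the database
GRANT ALTER ANY VIEW TO USER JOE;

-- Creating, altering or dropping triggers and indexes on tables
-- requires ALTER ANY TABLE (there is no separate trigger/index privilege)
GRANT ALTER ANY TABLE TO USER JOE;
```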
Database DDL Privileges
The syntax for granting privileges to create, alter or drop a database deviates from the normal syntax of granting DDL privileges for other object types.
CREATE
-
Allows creation of a database
ALTER
-
Allows modification of the current database
DROP
-
Allows deletion of the current database
ALL [PRIVILEGES]
-
Combines the
ALTER
andDROP
privileges.ALL
does not include theCREATE
privilege.
The ALTER DATABASE
and DROP DATABASE
privileges apply only to the current database, whereas DDL privileges ALTER ANY
and DROP ANY
on other object types apply to all objects of the specified type in the current database.
The privilege to alter or drop the current database can only be granted by administrators.
The CREATE DATABASE
privilege is a special kind of privilege as it is saved in the security database.
A list of users with the CREATE DATABASE
privilege is available from the virtual table SEC$DB_CREATORS
.
Only administrators in the security database can grant the privilege to create a new database.
Examples of Granting Database DDL Privileges
-
Granting
SUPERUSER
the privilege to create databases:GRANT CREATE DATABASE TO USER Superuser;
-
Granting
JOE
the privilege to executeALTER DATABASE
for the current database:GRANT ALTER DATABASE TO USER Joe;
-
Granting
FEDOR
the privilege to drop the current database:GRANT DROP DATABASE TO USER Fedor;
Assigning Roles
Assigning a role is similar to granting a privilege.
One or more roles can be assigned to one or more users, including the user PUBLIC
, using one GRANT
statement.
The WITH ADMIN OPTION
Clause
The optional WITH ADMIN OPTION
clause allows the users specified in the user list to grant the role(s) specified to other users or roles.
It is possible to assign this option to the user PUBLIC; this is not recommended.
For cumulative roles, a user can only exercise the WITH ADMIN OPTION
of a secondary role if all intermediate roles are also granted WITH ADMIN OPTION
.
That is, GRANT ROLEA TO ROLE ROLEB WITH ADMIN OPTION
, GRANT ROLEB TO ROLE ROLEC
, GRANT ROLEC TO USER USER1 WITH ADMIN OPTION
only allows USER1
to grant ROLEC
to other users or roles, while using GRANT ROLEB TO ROLE ROLEC WITH ADMIN OPTION
allows USER1
to grant ROLEA
, ROLEB
and ROLEC
to other users.
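Written out as statements, the chain described above looks like this:

```sql
GRANT ROLEA TO ROLE ROLEB WITH ADMIN OPTION;
GRANT ROLEB TO ROLE ROLEC;                    -- no ADMIN OPTION: chain broken here
GRANT ROLEC TO USER USER1 WITH ADMIN OPTION;

-- USER1 can now pass on only ROLEC.
-- Re-granting the middle link WITH ADMIN OPTION repairs the chain:
GRANT ROLEB TO ROLE ROLEC WITH ADMIN OPTION;
-- USER1 can now grant ROLEA, ROLEB and ROLEC to other users.
```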
Examples of Role Assignment
-
Assigning the
DIRECTOR
andMANAGER
roles to the userIVAN
:GRANT DIRECTOR, MANAGER TO USER IVAN;
-
Assigning the
MANAGER
role to the userALEX
with the authority to assign this role to other users:GRANT MANAGER TO USER ALEX WITH ADMIN OPTION;
-
Assigning the
DIRECTOR
role to userALEX
as a default role:GRANT DEFAULT DIRECTOR TO USER ALEX;
-
Assigning the
MANAGER
role to roleDIRECTOR
:GRANT MANAGER TO ROLE DIRECTOR;
14.6. Statements for Revoking Privileges
A REVOKE
statement is used for revoking privileges — including roles — from users and other database objects.
14.6.1. REVOKE
Revokes privileges or role assignments
DSQL, ESQL
REVOKE [GRANT OPTION FOR] <privileges>
FROM <grantee_list>
[{GRANTED BY | AS} [USER] grantor]
<privileges> ::=
!! See GRANT
syntax !!
REVOKE [ADMIN OPTION FOR] <role_granted_list>
FROM <role_grantee_list>
[{GRANTED BY | AS} [USER] grantor]

<role_granted_list> ::=
!! See GRANT syntax !!

<role_grantee_list> ::=
!! See GRANT syntax !!
REVOKE ALL ON ALL FROM <grantee_list>
<grantee_list> ::=
!! See GRANT
syntax !!
| Parameter | Description |
|---|---|
| grantor | The grantor user on whose behalf the privilege(s) are being revoked |
The REVOKE
statement revokes privileges that were granted using the GRANT
statement from users, roles, and other database objects.
See GRANT
for detailed descriptions of the various types of privileges.
Only the user who granted the privilege can revoke it.
The DEFAULT
Clause
When the DEFAULT
clause is specified, the role itself is not revoked; only its DEFAULT
property is removed.
The FROM
Clause
The FROM
clause specifies a list of users, roles and other database objects that will have the enumerated privileges revoked.
The optional USER
keyword in the FROM
clause allows you to specify exactly which type of grantee is to have the privilege revoked.
If a USER
(or ROLE
) keyword is not specified, the server first checks for a role with this name and, if there is no such role, the privileges are revoked from the user with that name without further checking.
Revoking Privileges from User PUBLIC
Privileges that were granted to the special user named PUBLIC must be revoked from the user PUBLIC.
Revoking the GRANT OPTION
The optional GRANT OPTION FOR
clause revokes the user’s privilege to grant the specified privileges to other users, roles, or database objects (as previously granted with the WITH GRANT OPTION
).
It does not revoke the specified privilege itself.
Removing the Privilege to One or More Roles
One usage of the REVOKE
statement is to remove roles that were assigned to a user, or a group of users, by a GRANT
statement.
In the case of multiple roles and/or multiple grantees, the REVOKE
verb is followed by the list of roles that will be removed from the list of users specified after the FROM
clause.
The optional ADMIN OPTION FOR
clause provides the means to revoke the grantee’s “administrator” privilege, the ability to assign the same role to other users, without revoking the grantee’s privilege to the role.
Multiple roles and grantees can be processed in a single statement.
Revoking Privileges That Were GRANTED BY
A privilege that has been granted using the GRANTED BY
clause is internally attributed explicitly to the grantor designated by that original GRANT
statement.
Only that user can revoke the granted privilege.
Using the GRANTED BY
clause you can revoke privileges as if you are the specified user.
To revoke a privilege with GRANTED BY
, the current user must be logged in either with full administrative privileges, or as the user designated as grantor by that GRANTED BY
clause.
Not even the owner of a role can use GRANTED BY unless they also have administrator privileges.
The non-standard AS
clause is supported as a synonym of the GRANTED BY
clause to simplify migration from other database systems.
Revoking ALL ON ALL
The REVOKE ALL ON ALL
statement allows a user to revoke all privileges (including roles) on all objects from one or more users, roles or other database objects.
It is a quick way to “clear” privileges when access to the database must be blocked for a particular user or role.
When the current user is logged in with full administrator privileges in the database, the REVOKE ALL ON ALL
will remove all privileges, no matter who granted them.
Otherwise, only the privileges granted by the current user are removed.
Examples using REVOKE
-
Revoking the privileges for selecting and inserting into the table (or view)
SALES
REVOKE SELECT, INSERT ON TABLE SALES FROM USER ALEX;
-
Revoking the privilege for selecting from the
CUSTOMER
table from theMANAGER
andENGINEER
roles and from the userIVAN
:REVOKE SELECT ON TABLE CUSTOMER FROM ROLE MANAGER, ROLE ENGINEER, USER IVAN;
-
Revoking from the
ADMINISTRATOR
role the privilege to grant any privileges on theCUSTOMER
table to other users or roles:REVOKE GRANT OPTION FOR ALL ON TABLE CUSTOMER FROM ROLE ADMINISTRATOR;
-
Revoking the privilege for selecting from the
COUNTRY
table and the privilege to reference theNAME
column of theCOUNTRY
table from any user, via the special userPUBLIC
:REVOKE SELECT, REFERENCES (NAME) ON TABLE COUNTRY FROM PUBLIC;
-
Revoking the privilege for selecting from the
EMPLOYEE
table from the userIVAN
, that was granted by the userALEX
:REVOKE SELECT ON TABLE EMPLOYEE FROM USER IVAN GRANTED BY ALEX;
-
Revoking the privilege for updating the
FIRST_NAME
andLAST_NAME
columns of theEMPLOYEE
table from the userIVAN
:REVOKE UPDATE (FIRST_NAME, LAST_NAME) ON TABLE EMPLOYEE FROM USER IVAN;
-
Revoking the privilege for inserting records into the
EMPLOYEE_PROJECT
table from theADD_EMP_PROJ
procedure:REVOKE INSERT ON EMPLOYEE_PROJECT FROM PROCEDURE ADD_EMP_PROJ;
-
Revoking the privilege for executing the procedure
ADD_EMP_PROJ
from theMANAGER
role:REVOKE EXECUTE ON PROCEDURE ADD_EMP_PROJ FROM ROLE MANAGER;
-
Revoking the privilege to grant the
EXECUTE
privilege for the functionGET_BEGIN_DATE
to other users from the roleMANAGER
:REVOKE GRANT OPTION FOR EXECUTE ON FUNCTION GET_BEGIN_DATE FROM ROLE MANAGER;
-
Revoking the
EXECUTE
privilege on the packageDATE_UTILS
from userALEX
:REVOKE EXECUTE ON PACKAGE DATE_UTILS FROM USER ALEX;
-
Revoking the
USAGE
privilege on the sequenceGEN_AGE
from the roleMANAGER
:REVOKE USAGE ON SEQUENCE GEN_AGE FROM ROLE MANAGER;
-
Revoking the
USAGE
privilege on the sequenceGEN_AGE
from the triggerTR_AGE_BI
:REVOKE USAGE ON SEQUENCE GEN_AGE FROM TRIGGER TR_AGE_BI;
-
Revoking the
USAGE
privilege on the exceptionE_ACCESS_DENIED
from the packagePKG_BILL
:REVOKE USAGE ON EXCEPTION E_ACCESS_DENIED FROM PACKAGE PKG_BILL;
-
Revoking the privilege to create tables from user
JOE
:REVOKE CREATE TABLE FROM USER Joe;
-
Revoking the privilege to alter any procedure from user
JOE
:REVOKE ALTER ANY PROCEDURE FROM USER Joe;
-
Revoking the privilege to create databases from user
SUPERUSER
:REVOKE CREATE DATABASE FROM USER Superuser;
-
Revoking the
DIRECTOR
andMANAGER
roles from the userIVAN
:REVOKE DIRECTOR, MANAGER FROM USER IVAN;
-
Revoke from the user
ALEX
the privilege to grant theMANAGER
role to other users:REVOKE ADMIN OPTION FOR MANAGER FROM USER ALEX;
-
Revoking all privileges (including roles) on all objects from the user
IVAN
:REVOKE ALL ON ALL FROM USER IVAN;
After this statement is executed by an administrator, the user
IVAN
will have no privileges whatsoever, except those granted throughPUBLIC
. -
Revoking the
DEFAULT
property of theDIRECTOR
role from userALEX
, while the role itself remains granted:REVOKE DEFAULT DIRECTOR FROM USER ALEX;
14.7. Mapping of Users to Objects
Now that Firebird supports multiple security databases, new problems arise that could not occur with a single, global security database. Clusters of databases using the same security database are effectively separated from each other. Mappings provide the means to achieve the same effect when multiple databases are using their own security databases. Some cases also require controlled, limited interaction between such clusters. For example:
-
when
EXECUTE STATEMENT ON EXTERNAL DATA SOURCE
requires data exchange between clusters -
when server-wide SYSDBA access to databases is needed from other clusters, using services.
-
On Windows, where support for Trusted User authentication requires mapping Windows users to a Firebird user and/or role. An example is the need for a
ROLE
granted to a Windows group to be assigned automatically to members of that group.
The single solution for all such cases is mapping the login information assigned to a user when it connects to a Firebird server to internal security objects in a database — CURRENT_USER
and CURRENT_ROLE
.
14.7.1. The Mapping Rule
The mapping rule consists of four pieces of information:
-
mapping scope — whether the mapping is local to the current database or whether its effect is to be global, affecting all databases in the cluster, including security databases
-
mapping name — an SQL identifier, since mappings are objects in a database, like any other
-
the object FROM which the mapping maps. It consists of four items:
-
The authentication source
-
plugin name or
-
the product of a mapping in another database or
-
use of server-wide authentication or
-
any method
-
-
The name of the database where authentication succeeded
-
The name of the object from which mapping is performed
-
The type of that name — username, role, or OS group — depending upon the plugin that added that name during authentication.
Each of these items is optional, except the type, which is required.
-
-
the object TO which the mapping maps. It consists of two items:
-
The name of the object TO which mapping is performed
-
The type, for which only
USER
orROLE
is valid
-
14.7.2. CREATE MAPPING
Creates a mapping of a security object
DSQL
CREATE [GLOBAL] MAPPING name
USING { PLUGIN plugin_name [IN database]
      | ANY PLUGIN [IN database | SERVERWIDE]
      | MAPPING [IN database]
      | '*' [IN database] }
FROM {ANY type | type from_name}
TO {USER | ROLE} [to_name]
| Parameter | Description |
|---|---|
| name | Mapping name. The maximum length is 63 characters. Must be unique among all mapping names in its context (local or global). |
| plugin_name | Authentication plugin name |
| database | Name of the database that was authenticated against |
| type | The type of object to be mapped. Possible types are plugin-specific. |
| from_name | The name of the object to be mapped |
| to_name | The name of the user or role to map to |
The CREATE MAPPING
statement creates a mapping of security objects (e.g. users, groups, roles) of one or more authentication plugins to internal security objects — CURRENT_USER
and CURRENT_ROLE
.
If the GLOBAL
clause is present, then the mapping will be applied not only for the current database, but for all databases in the same cluster, including security databases.
There can be global and local mappings with the same name. They are distinct objects.
Global mapping works best if a Firebird 3.0 or higher version database is used as the security database.
If you plan to use another database for this purpose — using your own provider, for example — then you should create a table in it named RDB$MAP, with the same structure as RDB$MAP in a Firebird security database and writable only by administrators.
The USING
clause describes the mapping source.
It has a complex set of options:
-
an explicit plugin name (
PLUGIN plugin_name
) means it applies only for that plugin -
it can use any available plugin (
ANY PLUGIN
); although not if the source is the product of a previous mapping -
it can be made to work only with server-wide plugins (
SERVERWIDE
) -
it can be made to work only with previous mapping results (
MAPPING
) -
you can avoid naming a specific method by using the asterisk (
*
) argument -
it can specify the name of the database that defined the mapping for the
FROM
object (IN database
)This argument is not valid for mapping server-wide authentication.
The FROM
clause describes the object to map.
The FROM
clause has a mandatory argument, the type of the object named.
It has the following options:
-
When mapping names from plugins, type is defined by the plugin
-
When mapping the product of a previous mapping, type can be only
USER
orROLE
-
If an explicit from_name is provided, the mapping applies only to objects with that name
-
Use the
ANY
keyword to work with any name of the given type.
The TO
clause specifies the user or role that is the result of the mapping.
The to_name is optional.
If it is not specified, then the original name of the mapped object will be used.
For roles, the role defined by a mapping rule is only applied when the user does not explicitly specify a role on connect.
The mapped role can be assumed later in the session using SET TRUSTED ROLE
, even when the mapped role is not explicitly granted to the user.
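For example, a mapped role might be assumed mid-session like this (a sketch; the mapping name MAP_MANAGERS and the role MANAGER are hypothetical):

```sql
-- Map any user authenticated by the Srp plugin to the MANAGER role
CREATE MAPPING MAP_MANAGERS USING PLUGIN SRP FROM ANY USER TO ROLE MANAGER;

-- Later, in a session that was started without an explicit role:
SET TRUSTED ROLE;
```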
Who Can Create a Mapping
The CREATE MAPPING
statement can be executed by:
-
The database owner — if the mapping is local
-
Users with the
CHANGE_MAPPING_RULES
system privilege — if the mapping is local
CREATE MAPPING
examples
-
Enable use of Windows trusted authentication in all databases that use the current security database:
CREATE GLOBAL MAPPING TRUSTED_AUTH USING PLUGIN WIN_SSPI FROM ANY USER TO USER;
-
Enable RDB$ADMIN access for Windows administrators in the current database:
CREATE MAPPING WIN_ADMINS USING PLUGIN WIN_SSPI FROM Predefined_Group DOMAIN_ANY_RID_ADMINS TO ROLE RDB$ADMIN;
The group
DOMAIN_ANY_RID_ADMINS
does not exist in Windows, but such a name would be added by theWin_Sspi
plugin to provide exact backwards compatibility. -
Enable a particular user from another database to access the current database with another name:
CREATE MAPPING FROM_RT USING PLUGIN SRP IN "rt" FROM USER U1 TO USER U2;
Database names or aliases will need to be enclosed in double quotes on operating systems that have case-sensitive file names.
-
Enable the server’s SYSDBA (from the main security database) to access the current database. (Assume that the database is using a non-default security database):
CREATE MAPPING DEF_SYSDBA USING PLUGIN SRP IN "security.db" FROM USER SYSDBA TO USER;
-
Ensure users who logged in using the legacy authentication plugin do not have too many privileges:
CREATE MAPPING LEGACY_2_GUEST USING PLUGIN legacy_auth FROM ANY USER TO USER GUEST;
14.7.3. ALTER MAPPING
Alters a mapping of a security object
DSQL
ALTER [GLOBAL] MAPPING name
USING { PLUGIN plugin_name [IN database]
      | ANY PLUGIN [IN database | SERVERWIDE]
      | MAPPING [IN database]
      | '*' [IN database] }
FROM {ANY type | type from_name}
TO {USER | ROLE} [to_name]
For details on the options, see CREATE MAPPING
.
The ALTER MAPPING
statement allows you to modify any of the existing mapping options, but a local mapping cannot be changed to GLOBAL
or vice versa.
Global and local mappings of the same name are different objects.
Who Can Alter a Mapping
The ALTER MAPPING
statement can be executed by:
-
The database owner — if the mapping is local
-
Users with the
CHANGE_MAPPING_RULES
system privilege — if the mapping is local
14.7.4. CREATE OR ALTER MAPPING
Creates a mapping of a security object if it doesn’t exist, or alters a mapping
DSQL
CREATE OR ALTER [GLOBAL] MAPPING name
USING { PLUGIN plugin_name [IN database]
      | ANY PLUGIN [IN database | SERVERWIDE]
      | MAPPING [IN database]
      | '*' [IN database] }
FROM {ANY type | type from_name}
TO {USER | ROLE} [to_name]
For details on the options, see CREATE MAPPING
.
The CREATE OR ALTER MAPPING
statement creates a new or modifies an existing mapping.
Global and local mappings of the same name are different objects.
14.7.5. DROP MAPPING
Drops a mapping of a security object
DSQL
DROP [GLOBAL] MAPPING name
| Parameter | Description |
|---|---|
| name | Mapping name |
The DROP MAPPING
statement removes an existing mapping.
If GLOBAL
is specified, then a global mapping will be removed.
Global and local mappings of the same name are different objects.
14.8. Database Encryption
Firebird provides a plugin mechanism to encrypt the data stored in the database. This mechanism does not encrypt the entire database, but only data pages, index pages, and blob pages.
To make database encryption possible, you need to obtain or write a database encryption plugin.
Out of the box, Firebird does not include a database encryption plugin. An example encryption plugin is included with the Firebird distribution to demonstrate the mechanism, but it is not suitable for real-world protection of a database.
The main problem with database encryption is how to store the secret key. Firebird provides support for transferring the key from the client, but this does not mean that storing the key on the client is the best way; it is one of several alternatives. However, keeping encryption keys on the same disk as the database is an insecure option.
For efficient separation of encryption and key access, the database encryption plugin functionality is divided into two parts: the encryption itself and the holder of the secret key. This can be an efficient approach when you want to use a good encryption algorithm, but you have your own custom method of storing the keys.
Once you have decided on the plugin and key-holder, you can perform the encryption.
14.8.1. Encrypting a Database
Encrypts the database using the specified encryption plugin
ALTER {DATABASE | SCHEMA} ENCRYPT WITH plugin_name [KEY key_name]
Parameter | Description |
---|---|
plugin_name |
The name of the encryption plugin |
key_name |
The name of the encryption key |
Encryption starts immediately after this statement completes, and will be performed in the background. Normal operations of the database are not disturbed during encryption.
The optional KEY
clause specifies the name of the key for the encryption plugin.
The plugin decides what to do with this key name.
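For example, assuming a hypothetical encryption plugin named MyDbCrypt that accepts a key named RED:

```sql
-- Start background encryption of the current database
ALTER DATABASE ENCRYPT WITH MyDbCrypt KEY RED;
```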
The encryption process can be monitored using the MON$CRYPT_PAGE column of the MON$DATABASE virtual table. For example, the following query will display the progress of the encryption process as a percentage:

SELECT MON$CRYPT_PAGE * 100 / MON$PAGES
FROM MON$DATABASE;
14.8.2. Decrypting a Database
Decrypts the database using the configured plugin and key
ALTER {DATABASE | SCHEMA} DECRYPT
Decryption starts immediately after this statement completes, and will be performed in the background. Normal operations of the database are not disturbed during decryption.
14.9. SQL Security
The SQL SECURITY
clause of various DDL statements enables executable objects (triggers, stored procedures, stored functions) to be defined to run in a specific context of privileges.
The SQL Security feature has two contexts: INVOKER
and DEFINER
.
The INVOKER
context corresponds to the privileges available to the current user or the calling object, while DEFINER
corresponds to those available to the owner of the object.
The SQL SECURITY
property is an optional part of an object’s definition that can be applied to the object with DDL statements.
The property cannot be dropped, but it can be changed from INVOKER
to DEFINER
and vice versa.
This is not the same thing as SQL privileges, which are applied to users and database objects to give them various types of access to other database objects.
When an executable object in Firebird needs access to a table, view or another executable object, the target object is not accessible if the invoker does not have the necessary privileges on that object.
That is, by default all executable objects have the SQL SECURITY INVOKER
property, and any caller lacking the necessary privileges will be rejected.
The default SQL Security behaviour of a database can be overridden using ALTER DATABASE
.
If a routine has the SQL SECURITY DEFINER
property applied, the invoking user or routine will be able to execute it if the required privileges have been granted to its owner, without the need for the caller to be granted those privileges as well.
In summary:
-
If
INVOKER
is set, the access rights for executing the call to an executable object are determined by checking the current user’s active set of privileges -
If
DEFINER
is set, the access rights of the object owner will be applied instead, regardless of the current user’s active set of privileges.
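As a sketch, a procedure with the DEFINER property lets callers read a table indirectly without holding SELECT on it themselves (the procedure name COUNT_EMPLOYEES and user IVAN are illustrative; EMPLOYEE is the example database table used elsewhere in this chapter):

```sql
CREATE PROCEDURE COUNT_EMPLOYEES
RETURNS (CNT INTEGER)
SQL SECURITY DEFINER
AS
BEGIN
  -- Runs with the owner's privileges, so the owner's SELECT right
  -- on EMPLOYEE is what is checked here, not the caller's
  SELECT COUNT(*) FROM EMPLOYEE INTO :CNT;
END

-- Callers need only the EXECUTE privilege on the procedure:
GRANT EXECUTE ON PROCEDURE COUNT_EMPLOYEES TO USER IVAN;
```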
15. Management Statements
Management statements are a class of SQL statements for administering aspects of the client/server environment, usually for the current session.
Typically, such statements start with the verb SET
.
The isql tool also has a collection of SET commands; those commands are specific to isql and are not part of the SQL language.
Management statements can run anywhere DSQL can run, but typically, the developer will want to run a management statement in a database trigger.
A subset of management statements can be used directly in PSQL modules without the need to wrap them in an EXECUTE STATEMENT
block.
For more details of the current set, see Management Statements in PSQL in the PSQL chapter.
Most of the management statements affect the current connection (attachment, or “session”) only, and do not require any authorization beyond the login privileges of the current user.
Some management statements operate beyond the scope of the current session.
Examples are the ALTER DATABASE {BEGIN | END} BACKUP
statements to control the “copy-safe” mode, or the ALTER EXTERNAL CONNECTIONS POOL
statements to manage connection pooling.
A set of system privileges, analogous with SQL privileges granted for database objects, is provided to enable the required authority to run a specific management statement in this category.
Some management statements use the verb ALTER rather than SET. Although such statements look like DDL, they change session or server state rather than persistent database metadata.
Unless explicitly mentioned otherwise in an “Available in” section, management statements are available in DSQL and PSQL. Availability in ESQL is not tracked by this Language Reference.
15.1. Data Type Behaviour
15.1.1. SET BIND
(Data Type Coercion Rules)
Configures data type coercion rules for the current session
SET BIND OF <type_from> TO <type_to>

<type_from> ::=
  <scalar_datatype>
  | <blob_datatype>
  | TIME ZONE
  | VARCHAR
  | {CHARACTER | CHAR} VARYING

<type_to> ::=
  <scalar_datatype>
  | <blob_datatype>
  | VARCHAR
  | {CHARACTER | CHAR} VARYING
  | LEGACY
  | NATIVE
  | EXTENDED
  | EXTENDED TIME WITH TIME ZONE
  | EXTENDED TIMESTAMP WITH TIME ZONE

<scalar_datatype> ::=
  !! See Scalar Data Types Syntax !!

<blob_datatype> ::=
  !! See BLOB Data Types Syntax !!
This statement makes it possible to substitute one data type with another when performing client-server interactions. In other words, type_from returned by the engine is represented as type_to in the client API.
Only fields returned by the database engine in regular messages are substituted according to these rules.
Variables returned as an array slice are not affected by the SET BIND
statement.
When a partial type definition is used (e.g. CHAR
instead of CHAR(n)
) in from_type, the coercion is performed for all CHAR
columns.
The special partial type TIME ZONE
stands for TIME WITH TIME ZONE
and TIMESTAMP WITH TIME ZONE
.
When a partial type definition is used in to_type, the engine defines missing details about that type automatically, based on the source column.
Changing the binding of any NUMERIC
or DECIMAL
data type does not affect the underlying integer type.
In contrast, changing the binding of an integer data type also affects appropriate NUMERIC
and DECIMAL
types.
For example, SET BIND OF INT128 TO DOUBLE PRECISION
will also map NUMERIC
and DECIMAL
with precision 19 or higher, as these types use INT128
as their underlying type.
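The integer-binding interaction described above can be exercised as follows (a sketch; the exact textual output depends on the client):

```sql
SET BIND OF INT128 TO DOUBLE PRECISION;
-- NUMERIC(38,2) uses INT128 as its underlying type,
-- so this value now also arrives at the client as DOUBLE PRECISION
SELECT CAST(123.45 AS NUMERIC(38,2)) FROM RDB$DATABASE;
```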
The special type LEGACY
is used when a data type that did not exist in previous Firebird versions must be represented in a way understandable by old client software (possibly with data loss).
The coercion rules applied in this case are shown in the table below.
| Native data type | Legacy data type |
|---|---|
| BOOLEAN | CHAR(5) |
| DECFLOAT | DOUBLE PRECISION |
| INT128 | BIGINT |
| TIME WITH TIME ZONE | TIME WITHOUT TIME ZONE |
| TIMESTAMP WITH TIME ZONE | TIMESTAMP WITHOUT TIME ZONE |
Using EXTENDED
for type_to causes the engine to coerce to an extended form of the type_from data type.
Currently, this works only for TIME/TIMESTAMP WITH TIME ZONE
; they are coerced to EXTENDED TIME/TIMESTAMP WITH TIME ZONE
.
The EXTENDED
type contains both the time zone name, and the corresponding GMT offset, so it remains usable if the client application cannot process named time zones properly (e.g. due to the missing ICU library).
Setting a binding to NATIVE
resets the existing coercion rule for this data type and returns it in its native format.
The initial bind rules of a connection can be configured through the DPB by providing a semicolon-separated list of <type_from> TO <type_to> options as the string value of isc_dpb_set_bind.
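For example, the string value passed for isc_dpb_set_bind might look as follows (illustrative; how the DPB is populated depends on the client library in use):

DECFLOAT TO DOUBLE PRECISION;INT128 TO BIGINT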
Execution of ALTER SESSION RESET will revert to the binding rules configured through the DPB, or otherwise the system default.
It is also possible to configure a default set of data type coercion rules for all clients through the DataTypeCompatibility setting in firebird.conf or databases.conf; see the Native to LEGACY coercion rules table above.
SET BIND Examples
-- native
SELECT CAST('123.45' AS DECFLOAT(16)) FROM RDB$DATABASE;
CAST
=======================
123.45
-- double
SET BIND OF DECFLOAT TO DOUBLE PRECISION;
SELECT CAST('123.45' AS DECFLOAT(16)) FROM RDB$DATABASE;
CAST
=======================
123.4500000000000
-- still double
SET BIND OF DECFLOAT(34) TO CHAR;
SELECT CAST('123.45' AS DECFLOAT(16)) FROM RDB$DATABASE;
CAST
=======================
123.4500000000000
-- text
SELECT CAST('123.45' AS DECFLOAT(34)) FROM RDB$DATABASE;
CAST
==========================================
123.45
In the case of missing ICU on the client side:
SELECT CURRENT_TIMESTAMP FROM RDB$DATABASE;
CURRENT_TIMESTAMP
=========================================================
2020-02-21 16:26:48.0230 GMT*
SET BIND OF TIME ZONE TO EXTENDED;
SELECT CURRENT_TIMESTAMP FROM RDB$DATABASE;
CURRENT_TIMESTAMP
=========================================================
2020-02-21 19:26:55.6820 +03:00
15.1.2. SET DECFLOAT
Configures DECFLOAT rounding and error behaviour for the current session
SET DECFLOAT { ROUND <round_mode> | TRAPS TO [<trap_opt> [, <trap_opt> ...]] }

<round_mode> ::= CEILING | UP | HALF_UP | HALF_EVEN | HALF_DOWN | DOWN | FLOOR | REROUND

<trap_opt> ::= DIVISION_BY_ZERO | INEXACT | INVALID_OPERATION | OVERFLOW | UNDERFLOW
SET DECFLOAT ROUND
SET DECFLOAT ROUND changes the rounding behaviour of operations on DECFLOAT. The default rounding mode is HALF_UP.
The initial configuration of a connection can also be specified using the DPB tag isc_dpb_decfloat_round with the desired round_mode as its string value.
The valid rounding modes are:
CEILING | towards +infinity |
UP | away from 0 |
HALF_UP | to nearest, if equidistant, then up (default) |
HALF_EVEN | to nearest, if equidistant, ensure last digit in the result will be even |
HALF_DOWN | to nearest, if equidistant, then down |
DOWN | towards 0 |
FLOOR | towards -infinity |
REROUND | up if digit to be rounded is 0 or 5, down in other cases |
The current value for the connection can be found using RDB$GET_CONTEXT('SYSTEM', 'DECFLOAT_ROUND').
Execution of ALTER SESSION RESET
will revert to the value configured through the DPB, or otherwise the system default.
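A short sketch of changing and then inspecting the rounding mode for the current session:

SET DECFLOAT ROUND HALF_EVEN;
-- verify the current setting
SELECT RDB$GET_CONTEXT('SYSTEM', 'DECFLOAT_ROUND') FROM RDB$DATABASE;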
SET DECFLOAT TRAPS
SET DECFLOAT TRAPS changes the error behaviour of operations on DECFLOAT. The default traps are DIVISION_BY_ZERO, INVALID_OPERATION, OVERFLOW; this default matches the behaviour specified in the SQL standard for DECFLOAT.
This statement controls whether certain exceptional conditions result in an error (a “trap”) or in alternative handling (for example, with the trap unset, an underflow returns 0 and an overflow returns an infinity).
The initial configuration of a connection can also be specified using the DPB tag isc_dpb_decfloat_traps with the desired comma-separated trap_opt values as its string value.
Valid trap options (exceptional conditions) are:
DIVISION_BY_ZERO | (set by default) |
INEXACT | — |
INVALID_OPERATION | (set by default) |
OVERFLOW | (set by default) |
UNDERFLOW | — |
The current value for the connection can be found using RDB$GET_CONTEXT('SYSTEM', 'DECFLOAT_TRAPS').
Execution of ALTER SESSION RESET
will revert to the value configured through the DPB, or otherwise the system default.
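For example, to disable the DIVISION_BY_ZERO trap so that a DECFLOAT division by zero is handled with a special value instead of raising an error (illustrative sketch):

SET DECFLOAT TRAPS TO INVALID_OPERATION, OVERFLOW;
-- with the DIVISION_BY_ZERO trap disabled, this should
-- return an infinity instead of a division-by-zero error
SELECT CAST(1 AS DECFLOAT(16)) / 0 FROM RDB$DATABASE;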
15.2. Connections Pool Management
Management statements to manage the external connections pool.
This connection pool is part of the Firebird server and used for connections to other databases or servers from the Firebird server itself.
15.2.1. ALTER EXTERNAL CONNECTIONS POOL
Manages the external connections pool
ALTER EXTERNAL CONNECTIONS POOL
  { CLEAR ALL
  | CLEAR OLDEST
  | SET LIFETIME lifetime <time-unit>
  | SET SIZE size }

<time-unit> ::= SECOND | MINUTE | HOUR
Parameter | Description |
---|---|
lifetime | Maximum lifetime of a connection in the pool. Minimum value is |
size | Maximum size of the connection pool. Range 0 - 1000. Setting to 0 disables the external connections pool |
When prepared, this statement is described like a DDL statement, but its effect is immediate: it is executed immediately and to completion, without waiting for transaction commit.
This statement can be issued from any connection, and changes are applied to the in-memory instance of the pool in the current Firebird process. If the process is Firebird Classic, execution only affects the current process (current connection), and does not affect other Classic processes.
Changes made with ALTER EXTERNAL CONNECTIONS POOL are not persistent: after a restart, Firebird will use the pool settings configured in firebird.conf by ExtConnPoolSize and ExtConnPoolLifeTime.
Clauses of ALTER EXTERNAL CONNECTIONS POOL
CLEAR ALL
-
Closes all idle connections and disassociates currently active connections; they are immediately closed when unused.
CLEAR OLDEST
-
Closes expired idle connections.
SET LIFETIME
-
Configures the maximum lifetime of an idle connection in the pool. The default value (in seconds) is set using the parameter ExtConnPoolLifetime in firebird.conf.
SET SIZE
-
Configures the maximum number of idle connections in the pool. The default value is set using the parameter ExtConnPoolSize in firebird.conf.
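Examples (the sizes and lifetimes shown are illustrative):

ALTER EXTERNAL CONNECTIONS POOL SET SIZE 50;
ALTER EXTERNAL CONNECTIONS POOL SET LIFETIME 10 MINUTE;
-- close expired idle connections now
ALTER EXTERNAL CONNECTIONS POOL CLEAR OLDEST;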
How the Connection Pool Works
Every successful connection is associated with a pool, which maintains two lists — one for idle connections and one for active connections.
When a connection in the “active” list has no active requests and no active transactions, it is assumed to be “unused”.
A reset of the unused connection is attempted using an ALTER SESSION RESET statement, and:
-
if the reset succeeds (no errors occur) the connection is moved into the “idle” list;
-
if the reset fails, the connection is closed;
-
if the pool has reached its maximum size, the oldest idle connection is closed.
-
When the lifetime of an idle connection expires, it is deleted from the pool and closed.
New Connections
When the engine is asked to create a new external connection, the pool first looks for a candidate in the “idle” list. The search, which is case-sensitive, involves four parameters:
-
connection string
-
username
-
password
-
role
If a suitable connection is found, it is tested to check that it is still alive.
-
If it fails the check, it is deleted, and the search is repeated, without reporting any error to the client
-
Otherwise, the live connection is moved from the “idle” list to the “active” list and returned to the caller
-
If there are multiple suitable connections, the most recently used one is chosen
-
If there is no suitable connection, a new one is created and added to the “active” list.
15.3. Changing the Current Role
15.3.1. SET ROLE
Sets the active role of the current session
DSQL
SET ROLE {role_name | NONE}
Parameter | Description |
---|---|
role_name | The name of the role to apply |
The SET ROLE statement allows a user to assume a different role; it sets the CURRENT_ROLE context variable to role_name, if that role has been granted to the CURRENT_USER. For this session, the user receives the privileges granted by that role. Any rights granted to the previous role are removed from the session.
Use NONE instead of role_name to clear the CURRENT_ROLE.
When the specified role does not exist or has not been explicitly granted to the user, the error “Role role_name is invalid or unavailable” is raised.
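For example (MANAGER is a hypothetical role that has been granted to the current user):

SET ROLE MANAGER;
SELECT CURRENT_ROLE FROM RDB$DATABASE;
-- clear the current role again
SET ROLE NONE;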
15.3.2. SET TRUSTED ROLE
Sets the active role of the current session to the trusted role
DSQL
SET TRUSTED ROLE
The SET TRUSTED ROLE statement makes it possible to assume the role assigned to the user through a mapping rule (see Mapping of Users to Objects). The role assigned through a mapping rule is assumed automatically on connect, if the user hasn’t specified an explicit role. The SET TRUSTED ROLE statement makes it possible to assume the mapped (or “trusted”) role at a later time, or to assume it again after the current role was changed using SET ROLE.
A trusted role is not a specific type of role, but can be any role that was created using CREATE ROLE, or a predefined system role such as RDB$ADMIN. An attachment (session) has a trusted role when the security objects mapping subsystem finds a match between the authentication result passed from the plugin and a local or global mapping to a role for the current database. The role may be one that is not granted explicitly to that user.
When a session has no trusted role, executing SET TRUSTED ROLE will raise the error “Your attachment has no trusted role”.
While the CURRENT_ROLE can be changed using SET ROLE, it is not always possible to revert to a trusted role using the same command, because SET ROLE checks if the role has been granted to the user. With SET TRUSTED ROLE, the trusted role can be assumed again even when SET ROLE fails.
SET TRUSTED ROLE Examples
-
Assuming a mapping rule that assigns the role ROLE1 to a user ALEX:
CONNECT 'employee' USER ALEX PASSWORD 'password';

SELECT CURRENT_ROLE FROM RDB$DATABASE;

ROLE
===============================
ROLE1

SET ROLE ROLE2;
SELECT CURRENT_ROLE FROM RDB$DATABASE;

ROLE
===============================
ROLE2

SET TRUSTED ROLE;
SELECT CURRENT_ROLE FROM RDB$DATABASE;

ROLE
===============================
ROLE1
15.4. Session Timeouts
Statements for management of timeouts of the current connection.
15.4.1. SET SESSION IDLE TIMEOUT
Sets the session idle timeout
SET SESSION IDLE TIMEOUT value [<time-unit>] <time-unit> ::= MINUTE | HOUR | SECOND
Parameter | Description |
---|---|
value | The timeout duration expressed in time-unit. A value of 0 means no idle timeout is set at connection level |
time-unit | Time unit of the timeout. Defaults to MINUTE |
The SET SESSION IDLE TIMEOUT statement sets an idle timeout at connection level and takes effect immediately.
The statement can run outside transaction control (without an active transaction).
Setting a value larger than configured for the database is allowed, but is effectively ignored, see also Determining the Timeout that is In Effect.
The current timeout set for the session can be retrieved through RDB$GET_CONTEXT, namespace SYSTEM and variable SESSION_IDLE_TIMEOUT. Information is also available from MON$ATTACHMENTS:
MON$IDLE_TIMEOUT
-
Connection-level idle timeout in seconds; 0 if timeout is not set.
MON$IDLE_TIMER
-
Idle timer expiration time; contains NULL if an idle timeout was not set, or if a timer is not running.
Both RDB$GET_CONTEXT('SYSTEM', 'SESSION_IDLE_TIMEOUT') and MON$ATTACHMENTS.MON$IDLE_TIMEOUT report the idle timeout configured for the connection; they do not report the effective idle timeout.
The session idle timeout is reset when ALTER SESSION RESET is executed.
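A short sketch of setting and then inspecting the idle timeout (the duration is illustrative):

SET SESSION IDLE TIMEOUT 30 MINUTE;
SELECT RDB$GET_CONTEXT('SYSTEM', 'SESSION_IDLE_TIMEOUT') FROM RDB$DATABASE;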
Idle Session Timeouts
An idle session timeout allows a user connection to be closed automatically after a specified period of inactivity. A database administrator can use it to enforce closure of old connections that have become inactive, reducing unnecessary resource consumption. It can also be used by application and tool developers as an alternative to writing their own modules for controlling connection lifetime.
By default, the idle timeout is not enabled. No minimum or maximum limit is imposed, but a reasonably large period — such as a few hours — is recommended.
How the Idle Session Timeout Works
-
When the user API call leaves the engine (returns to the calling connection) a special idle timer associated with the current connection is started
-
When another user API call from that connection enters the engine, the idle timer is stopped and reset to zero
-
If the maximum idle time is exceeded, the engine immediately closes the connection in the same way as with asynchronous connection cancellation:
-
all active statements and cursors are closed
-
all active transactions are rolled back
-
The network connection remains open at this point, allowing the client application to get the exact error code on the next API call. The network connection will be closed on the server side, after an error is reported or in due course as a result of a network timeout from a client-side disconnection.
-
Whenever a connection is cancelled, the next user API call returns the error isc_att_shutdown with a secondary error specifying the exact reason:
isc_att_shut_idle
-
Idle timeout expired
isc_att_shut_killed
-
Killed by database administrator
isc_att_shut_db_down
-
Database is shut down
isc_att_shut_engine
-
Engine is shut down
Setting the Idle Session Timeout
The idle timer will not start if the timeout period is set to zero.
An idle session timeout can be set:
-
At database level, the database administrator can set the configuration parameter ConnectionIdleTimeout, an integer value in minutes. The default value of zero means no timeout is set. It is configurable per-database, so it may be set globally in firebird.conf and overridden for individual databases in databases.conf as required.
The scope of this method is all user connections, except system connections (garbage collector, cache writer, etc.).
-
At connection level, the idle session timeout is supported by both the SET SESSION IDLE TIMEOUT statement and the API (setIdleTimeout). The scope of this method is specific to the supplied connection (attachment). Its value in the API is in seconds; in the SQL syntax it can be hours, minutes or seconds.
For more information about the API calls, consult the Firebird 4.0 Release Notes.
Determining the Timeout that is In Effect
The effective idle timeout value is determined whenever a user API call leaves the engine, checking first at connection level and then at database level. A connection-level timeout can override the value of a database-level setting, as long as the period of time for the connection-level setting is no longer than any non-zero timeout that is applicable at database level.
Take note of the difference between the time units at each level.
At database level, in the configuration files, the unit for ConnectionIdleTimeout is minutes; in the SQL statement the default unit is minutes, and at the API level the unit is seconds. Absolute precision is not guaranteed in any case, especially when the system load is high, but timeouts are guaranteed not to expire earlier than the moment specified.
15.4.2. SET STATEMENT TIMEOUT
Sets the statement timeout for a connection
SET STATEMENT TIMEOUT value [<time-unit>] <time-unit> ::= SECOND | MILLISECOND | MINUTE | HOUR
Parameter | Description |
---|---|
value | The timeout duration expressed in time-unit. A value of 0 means no statement timeout is set at connection level |
time-unit | Time unit of the timeout. Defaults to SECOND |
The SET STATEMENT TIMEOUT statement sets a statement timeout at connection level and takes effect immediately.
The statement can run outside transaction control (without an active transaction).
Setting a value larger than configured for the database is allowed, but is effectively ignored, see also Determining the Statement Timeout that is In Effect.
The current statement timeout set for the session can be retrieved through RDB$GET_CONTEXT, namespace SYSTEM and variable STATEMENT_TIMEOUT. Information is also available from MON$ATTACHMENTS:
MON$STATEMENT_TIMEOUT
-
Connection-level statement timeout in milliseconds; 0 if timeout is not set.
In MON$STATEMENTS:
MON$STATEMENT_TIMEOUT
-
Statement-level statement timeout in milliseconds; 0 if timeout is not set.
MON$STATEMENT_TIMER
-
Timeout timer expiration time; contains NULL if a statement timeout was not set, or if a timer is not running.
Both RDB$GET_CONTEXT('SYSTEM', 'STATEMENT_TIMEOUT') and MON$ATTACHMENTS.MON$STATEMENT_TIMEOUT report the statement timeout configured for the connection, and MON$STATEMENTS.MON$STATEMENT_TIMEOUT for the statement; they do not report the effective statement timeout.
The statement timeout is reset when ALTER SESSION RESET is executed.
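A short sketch (the durations shown are illustrative):

SET STATEMENT TIMEOUT 5 MINUTE;
-- finer granularity is also accepted
SET STATEMENT TIMEOUT 500 MILLISECOND;
-- a value of zero removes the connection-level statement timeout
SET STATEMENT TIMEOUT 0;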
Statement Timeouts
The statement timeout feature allows execution of a statement to be stopped automatically when it has been running longer than a given timeout period. It gives the database administrator an instrument for limiting excessive resource consumption from heavy queries.
Statement timeouts can also be useful to application developers when creating and debugging complex queries without advance knowledge of execution time. Testers and others could find them handy for detecting long-running queries and establishing finite run times for test suites.
How the Statement Timeout Works
When the statement starts execution, or a cursor is opened, the engine starts a special timer. It is stopped when the statement completes execution, or the last record has been fetched by the cursor. A fetch does not reset this timer.
When the timeout point is reached:
-
if statement execution is active, it stops at the closest possible moment
-
if the statement is currently not active (between fetches, for example), it is marked as cancelled, and the next fetch will break execution and return an error
Statement types excluded from timeouts
Statement timeouts are not applicable to some types of statement and are ignored for them.
Setting a Statement Timeout
The timer will not start if the timeout period is set to zero.
A statement timeout can be set:
-
at database level, by the database administrator, by setting the configuration parameter StatementTimeout in firebird.conf or databases.conf. StatementTimeout is an integer representing the number of seconds after which statement execution will be cancelled automatically by the engine. Zero means no timeout is set. A non-zero setting will affect all statements in all connections.
-
at connection level, using SET STATEMENT TIMEOUT or the API for setting a statement timeout (setStatementTimeout). A connection-level setting (via SQL or the API) affects all statements for the given connection; units for the timeout period at this level can be specified to any granularity from hours to milliseconds.
-
at statement level, using the API, in milliseconds
at statement level, using the API, in milliseconds
Determining the Statement Timeout that is In Effect
The statement timeout value that is in effect is determined whenever a statement starts executing, or a cursor is opened. In searching out the timeout in effect, the engine goes up through the levels, from statement through to database and/or global levels until it finds a non-zero value. If the value in effect turns out to be zero then no statement timer is running and no timeout applies.
A statement-level or connection-level timeout can override the value of a database-level setting, as long as the period of time for the lower-level setting is no longer than any non-zero timeout that is applicable at database level.
Take note of the difference between the time units at each level.
At database level, in the configuration files, the unit for StatementTimeout is seconds, while at connection and statement level the unit can be chosen explicitly. Absolute precision is not guaranteed in any case, especially when the system load is high, but timeouts are guaranteed not to expire earlier than the moment specified.
Whenever a statement times out and is cancelled, the next user API call returns the error isc_cancelled
with a secondary error specifying the exact reason, viz.,
isc_cfg_stmt_timeout
-
Config level timeout expired
isc_att_stmt_timeout
-
Attachment level timeout expired
isc_req_stmt_timeout
-
Statement level timeout expired
-
A client application could wait longer than the time set by the timeout value if the engine needs to undo a lot of actions as a result of the statement cancellation
-
When the engine runs an EXECUTE STATEMENT statement, it passes the remainder of the currently active timeout to the new statement. If the external (remote) engine does not support statement timeouts, the local engine silently ignores any corresponding error.
When the engine acquires a lock from the lock manager, it tries to lower the value of the lock timeout using the remainder of the currently active statement timeout, if possible. Due to lock manager internals, any statement timeout remainder will be rounded up to whole seconds.
15.5. Time Zone Management
Statements for management of time zone features of the current connection.
15.5.1. SET TIME ZONE
Sets the session time zone
SET TIME ZONE { time_zone_string | LOCAL }
Changes the session time zone to the specified time zone.
Specifying LOCAL will revert to the initial session time zone (either the default, or as specified through the connection property isc_dpb_session_time_zone).
Executing ALTER SESSION RESET has the same effect on the session time zone as SET TIME ZONE LOCAL, but will also reset other session properties.
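Examples:

SET TIME ZONE 'America/Sao_Paulo';
-- a fixed offset is also accepted
SET TIME ZONE '-02:00';
-- revert to the initial session time zone
SET TIME ZONE LOCAL;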
15.6. Optimizer Configuration
15.6.1. SET OPTIMIZE
Configures whether the optimizer should optimize for fetching first or all rows.
SET OPTIMIZE <optimize-mode> <optimize-mode> ::= FOR {FIRST | ALL} ROWS | TO DEFAULT
This feature allows the optimizer to consider another (hopefully better) plan if only a subset of rows is fetched initially by the user application (with the remaining rows being fetched on demand), thus improving the response time.
It can also be specified at the statement level using the OPTIMIZE FOR clause.
The default behaviour can be specified globally using the OptimizeForFirstRows setting in firebird.conf or databases.conf.
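For example (illustrative):

-- favour plans that return the first rows quickly
SET OPTIMIZE FOR FIRST ROWS;
-- revert to the database or global default
SET OPTIMIZE TO DEFAULT;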
15.7. Reset Session State
15.7.1. ALTER SESSION RESET
Resets the session state of the current connection to its initial values
ALTER SESSION RESET
Resetting the session can be useful for reusing the connection by a client application (for example, by a client-side connection pool). When this statement is executed, all user context variables are cleared, contents of global temporary tables are cleared, and all session-level settings are reset to their initial values.
It is possible to execute ALTER SESSION RESET without a transaction.
Execution of ALTER SESSION RESET performs the following steps:
-
Error isc_ses_reset_err (335545206) is raised if any transaction is active in the current session other than the current transaction (the one executing ALTER SESSION RESET) and two-phase transactions in the prepared state.
-
System variable RESETTING is set to TRUE.
-
ON DISCONNECT database triggers are fired, if present and if database triggers are not disabled for the current connection.
-
The current transaction (the one executing ALTER SESSION RESET), if present, is rolled back. A warning is reported if this transaction modified data before resetting the session.
-
Session configuration is reset to its initial values. This includes, but is not limited to:
-
DECFLOAT parameters (TRAPS and ROUND) are reset to the initial values defined using the DPB at connect time, or otherwise the system default.
-
Session and statement timeouts are reset to zero.
-
The current role is restored to the initial value defined using the DPB at connect time, and, if the role changed, the security classes cache is cleared.
-
The session time zone is reset to the initial value defined using the DPB at connect time, or otherwise the system default.
-
The bind configuration is reset to the initial value defined using the DPB at connect time, or otherwise the database or system default.
-
In general, configuration values should revert to the values configured using the DPB at connect time, or otherwise the database or system default.
-
Context variables defined for the USER_SESSION namespace are removed (USER_TRANSACTION was cleared earlier by the transaction roll back).
-
Global temporary tables defined as ON COMMIT PRESERVE ROWS are truncated (their contents are cleared).
-
ON CONNECT database triggers are fired, if present and if database triggers are not disabled for the current connection.
-
A new transaction is implicitly started with the same parameters as the transaction that was rolled back (if there was a transaction).
-
System variable RESETTING is set to FALSE.
Error Handling
Any error raised by ON DISCONNECT triggers aborts the session reset and leaves the session state unchanged. Such errors are reported using the primary error code isc_session_reset_err (335545206) and error text "Cannot reset user session".
Any error raised after ON DISCONNECT triggers (including errors raised by ON CONNECT triggers) aborts both the session reset and the connection itself. Such errors are reported using the primary error code isc_ses_reset_failed (335545272) and error text "Reset of user session failed. Connection is shut down.". Subsequent operations on the connection (except detach) will fail with error isc_att_shutdown (335544856).
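A short sketch showing that session state is cleared (MYVAR is a hypothetical context variable):

SELECT RDB$SET_CONTEXT('USER_SESSION', 'MYVAR', '1') FROM RDB$DATABASE;
ALTER SESSION RESET;
-- the variable is gone after the reset; this now returns NULL
SELECT RDB$GET_CONTEXT('USER_SESSION', 'MYVAR') FROM RDB$DATABASE;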
15.8. Debugging
15.8.1. SET DEBUG OPTION
Sets debug options
SET DEBUG OPTION option-name = value
Option name | Value Type | Description |
---|---|---|
DSQL_KEEP_BLR | BOOLEAN | Stores statement BLR for retrieval with isc_info_sql_exec_path_blr_bytes and isc_info_sql_exec_path_blr_text |
SET DEBUG OPTION configures debug information for the current connection.
Debug options are closely tied to engine internals, and their use is discouraged unless you understand how these internals may change between versions.
Appendix A: Supplementary Information
In this Appendix are topics that developers may wish to refer to, to enhance understanding of features or changes.
The RDB$VALID_BLR Field
The field RDB$VALID_BLR in the system tables RDB$PROCEDURES, RDB$FUNCTIONS and RDB$TRIGGERS signals possible invalidation of a PSQL module after alteration of a domain or table column on which the module depends. RDB$VALID_BLR is set to 0 for any procedure, function or trigger whose code is made invalid by such a change. The field RDB$VALID_BODY_FLAG in RDB$PACKAGES serves a similar purpose for packages.
How Invalidation Works
In PSQL modules, dependencies arise on the definitions of table columns accessed and also on any parameter or variable that has been defined in the module using the TYPE OF clause.
After the engine has altered any domain, including the implicit domains created internally behind column definitions and output parameters, the engine internally recompiles all of its dependencies.
Any module that fails to recompile because of an incompatibility arising from a domain change is marked as invalid (“invalidated”) by setting the RDB$VALID_BLR field in its system record (in RDB$PROCEDURES, RDB$FUNCTIONS or RDB$TRIGGERS, as appropriate) to zero.
Revalidation (setting RDB$VALID_BLR to 1) occurs when
-
the domain is altered again and the new definition is compatible with the previously invalidated module definition, or
-
the previously invalidated module is altered to match the new domain definition
The following query will find the modules that depend on a specific domain and report the state of their RDB$VALID_BLR fields:
SELECT * FROM (
SELECT
'Procedure',
rdb$procedure_name,
rdb$valid_blr
FROM rdb$procedures
UNION ALL
SELECT
'Function',
rdb$function_name,
rdb$valid_blr
FROM rdb$functions
UNION ALL
SELECT
'Trigger',
rdb$trigger_name,
rdb$valid_blr
FROM rdb$triggers
) (type, name, valid)
WHERE EXISTS
(SELECT * from rdb$dependencies
WHERE rdb$dependent_name = name
AND rdb$depended_on_name = 'MYDOMAIN')
/* Replace MYDOMAIN with the actual domain name.
Use all-caps if the domain was created
case-insensitively. Otherwise, use the exact
capitalisation. */
The following query will find the modules that depend on a specific table column and report the state of their RDB$VALID_BLR fields:
SELECT * FROM (
SELECT
'Procedure',
rdb$procedure_name,
rdb$valid_blr
FROM rdb$procedures
UNION ALL
SELECT
'Function',
rdb$function_name,
rdb$valid_blr
FROM rdb$functions
UNION ALL
SELECT
'Trigger',
rdb$trigger_name,
rdb$valid_blr
FROM rdb$triggers) (type, name, valid)
WHERE EXISTS
(SELECT *
FROM rdb$dependencies
WHERE rdb$dependent_name = name
AND rdb$depended_on_name = 'MYTABLE'
AND rdb$field_name = 'MYCOLUMN')
All PSQL invalidations caused by domain/column changes are reflected in the RDB$VALID_BLR field.
-
For PSQL modules inherited from earlier Firebird versions (including a number of system triggers, even if the database was created under Firebird 2.1 or higher), RDB$VALID_BLR is NULL. This does not imply that their BLR is invalid.
-
The isql commands SHOW PROCEDURES and SHOW TRIGGERS display an asterisk in the RDB$VALID_BLR column for any module for which the value is zero (i.e. invalid). However, SHOW PROCEDURE <procname> and SHOW TRIGGER <trigname>, which display individual PSQL modules, do not signal invalid BLR.
A Note on Equality
This note about equality and inequality operators applies everywhere in Firebird’s SQL language.
The “=” operator, which is explicitly used in many conditions, only matches values to values. According to the SQL standard, NULL is not a value, and hence two NULLs are neither equal nor unequal to one another. If you need NULLs to match each other in a condition, use the IS NOT DISTINCT FROM operator. This operator returns true if the operands have the same value or if they are both NULL.
select *
from A join B
on A.id is not distinct from B.code
Likewise, in cases where you want to test against NULL for a condition of inequality, use IS DISTINCT FROM, not “<>”. If you want NULL to be considered different from any value and two NULLs to be considered equal:
select *
from A join B
on A.id is distinct from B.code
Appendix B: Exception Codes and Messages
This appendix includes:
Custom Exceptions
You can create custom exceptions for use in PSQL modules, with message text of up to 1,021 characters. For more information, see CREATE EXCEPTION.
The Firebird SQLCODE error codes do not correlate with the standards-compliant SQLSTATE codes. SQLCODE has been used for many years and should be considered as deprecated now. Support for SQLCODE is likely to be dropped in a future version.
SQLSTATE Error Codes and Descriptions
This table provides the error codes and message texts for the SQLSTATE context variables.
The structure of an SQLSTATE error code is five characters comprising the SQL error class (2 characters) and the SQL subclass (3 characters).
Although Firebird tries to use SQLSTATE codes defined in ISO/IEC 9075 (the SQL standard), some are non-standard or derive from older standards like X/Open SQL for historic reasons.
| SQLSTATE | Mapped Message |
|---|---|
| **SQLCLASS 00 (Success)** | |
| 00000 | Success |
| **SQLCLASS 01 (Warning)** | |
| 01000 | General warning |
| 01001 | Cursor operation conflict |
| 01002 | Disconnect error |
| 01003 | NULL value eliminated in set function |
| 01004 | String data, right-truncated |
| 01005 | Insufficient item descriptor areas |
| 01006 | Privilege not revoked |
| 01007 | Privilege not granted |
| 01008 | Implicit zero-bit padding |
| 01100 | Statement reset to unprepared |
| 01101 | Ongoing transaction has been committed |
| 01102 | Ongoing transaction has been rolled back |
| **SQLCLASS 02 (No Data)** | |
| 02000 | No data found or no rows affected |
| **SQLCLASS 07 (Dynamic SQL error)** | |
| 07000 | Dynamic SQL error |
| 07001 | Wrong number of input parameters |
| 07002 | Wrong number of output parameters |
| 07003 | Cursor specification cannot be executed |
| 07004 | USING clause required for dynamic parameters |
| 07005 | Prepared statement not a cursor-specification |
| 07006 | Restricted data type attribute violation |
| 07007 | USING clause required for result fields |
| 07008 | Invalid descriptor count |
| 07009 | Invalid descriptor index |
| **SQLCLASS 08 (Connection Exception)** | |
| 08001 | Client unable to establish connection |
| 08002 | Connection name in use |
| 08003 | Connection does not exist |
| 08004 | Server rejected the connection |
| 08006 | Connection failure |
| 08007 | Transaction resolution unknown |
| **SQLCLASS 0A (Feature Not Supported)** | |
| 0A000 | Feature not supported |
| **SQLCLASS 0B (Invalid Transaction Initiation)** | |
| 0B000 | Invalid transaction initiation |
| **SQLCLASS 0L (Invalid Grantor)** | |
| 0L000 | Invalid grantor |
| **SQLCLASS 0P (Invalid Role Specification)** | |
| 0P000 | Invalid role specification |
| **SQLCLASS 0U (Attempt to Assign to Non-Updatable Column)** | |
| 0U000 | Attempt to assign to non-updatable column |
| **SQLCLASS 0V (Attempt to Assign to Ordering Column)** | |
| 0V000 | Attempt to assign to ordering column |
| **SQLCLASS 20 (Case Not Found For Case Statement)** | |
| 20000 | Case not found for case statement |
| **SQLCLASS 21 (Cardinality Violation)** | |
| 21000 | Cardinality violation |
| 21S01 | Insert value list does not match column list |
| 21S02 | Degree of derived table does not match column list |
| **SQLCLASS 22 (Data Exception)** | |
| 22000 | Data exception |
| 22001 | String data, right truncation |
| 22002 | Null value, no indicator parameter |
| 22003 | Numeric value out of range |
| 22004 | Null value not allowed |
| 22005 | Error in assignment |
| 22006 | Null value in field reference |
| 22007 | Invalid datetime format |
| 22008 | Datetime field overflow |
| 22009 | Invalid time zone displacement value |
| 2200A | Null value in reference target |
| 2200B | Escape character conflict |
| 2200C | Invalid use of escape character |
| 2200D | Invalid escape octet |
| 2200E | Null value in array target |
| 2200F | Zero-length character string |
| 2200G | Most specific type mismatch |
| 22010 | Invalid indicator parameter value |
| 22011 | Substring error |
| 22012 | Division by zero |
| 22014 | Invalid update value |
| 22015 | Interval field overflow |
| 22018 | Invalid character value for cast |
| 22019 | Invalid escape character |
| 2201B | Invalid regular expression |
| 2201C | Null row not permitted in table |
| 22020 | Invalid limit value |
| 22021 | Character not in repertoire |
| 22022 | Indicator overflow |
| 22023 | Invalid parameter value |
| 22024 | Character string not properly terminated |
| 22025 | Invalid escape sequence |
| 22026 | String data, length mismatch |
| 22027 | Trim error |
| 22028 | Row already exists |
| 2202D | Null instance used in mutator function |
| 2202E | Array element error |
| 2202F | Array data, right truncation |
| **SQLCLASS 23 (Integrity Constraint Violation)** | |
| 23000 | Integrity constraint violation |
| **SQLCLASS 24 (Invalid Cursor State)** | |
| 24000 | Invalid cursor state |
| 24504 | The cursor identified in the UPDATE, DELETE, SET, or GET statement is not positioned on a row |
| **SQLCLASS 25 (Invalid Transaction State)** | |
| 25000 | Invalid transaction state |
| 25S01 | Transaction state |
| 25S02 | Transaction is still active |
| 25S03 | Transaction is rolled back |
| **SQLCLASS 26 (Invalid SQL Statement Name)** | |
| 26000 | Invalid SQL statement name |
| **SQLCLASS 27 (Triggered Data Change Violation)** | |
| 27000 | Triggered data change violation |
| **SQLCLASS 28 (Invalid Authorization Specification)** | |
| 28000 | Invalid authorization specification |
| **SQLCLASS 2B (Dependent Privilege Descriptors Still Exist)** | |
| 2B000 | Dependent privilege descriptors still exist |
| **SQLCLASS 2C (Invalid Character Set Name)** | |
| 2C000 | Invalid character set name |
| **SQLCLASS 2D (Invalid Transaction Termination)** | |
| 2D000 | Invalid transaction termination |
| **SQLCLASS 2E (Invalid Connection Name)** | |
| 2E000 | Invalid connection name |
| **SQLCLASS 2F (SQL Routine Exception)** | |
| 2F000 | SQL routine exception |
| 2F002 | Modifying SQL-data not permitted |
| 2F003 | Prohibited SQL-statement attempted |
| 2F004 | Reading SQL-data not permitted |
| 2F005 | Function executed no return statement |
| **SQLCLASS 33 (Invalid SQL Descriptor Name)** | |
| 33000 | Invalid SQL descriptor name |
| **SQLCLASS 34 (Invalid Cursor Name)** | |
| 34000 | Invalid cursor name |
| **SQLCLASS 35 (Invalid Condition Number)** | |
| 35000 | Invalid condition number |
| **SQLCLASS 36 (Cursor Sensitivity Exception)** | |
| 36001 | Request rejected |
| 36002 | Request failed |
| **SQLCLASS 37 (Invalid Identifier)** | |
| 37000 | Invalid identifier |
| 37001 | Identifier too long |
| **SQLCLASS 38 (External Routine Exception)** | |
| 38000 | External routine exception |
| **SQLCLASS 39 (External Routine Invocation Exception)** | |
| 39000 | External routine invocation exception |
| **SQLCLASS 3B (Invalid Save Point)** | |
| 3B000 | Invalid save point |
| **SQLCLASS 3C (Ambiguous Cursor Name)** | |
| 3C000 | Ambiguous cursor name |
| **SQLCLASS 3D (Invalid Catalog Name)** | |
| 3D000 | Invalid catalog name |
| 3D001 | Catalog name not found |
| **SQLCLASS 3F (Invalid Schema Name)** | |
| 3F000 | Invalid schema name |
| **SQLCLASS 40 (Transaction Rollback)** | |
| 40000 | Ongoing transaction has been rolled back |
| 40001 | Serialization failure |
| 40002 | Transaction integrity constraint violation |
| 40003 | Statement completion unknown |
| **SQLCLASS 42 (Syntax Error or Access Violation)** | |
| 42000 | Syntax error or access violation |
| 42702 | Ambiguous column reference |
| 42725 | Ambiguous function reference |
| 42818 | The operands of an operator or function are not compatible |
| 42S01 | Base table or view already exists |
| 42S02 | Base table or view not found |
| 42S11 | Index already exists |
| 42S12 | Index not found |
| 42S21 | Column already exists |
| 42S22 | Column not found |
| **SQLCLASS 44 (With Check Option Violation)** | |
| 44000 | WITH CHECK OPTION violation |
| **SQLCLASS 45 (Unhandled User-defined Exception)** | |
| 45000 | Unhandled user-defined exception |
| **SQLCLASS 54 (Program Limit Exceeded)** | |
| 54000 | Program limit exceeded |
| 54001 | Statement too complex |
| 54011 | Too many columns |
| 54023 | Too many arguments |
| **SQLCLASS HY (CLI-specific Condition)** | |
| HY000 | CLI-specific condition |
| HY001 | Memory allocation error |
| HY003 | Invalid data type in application descriptor |
| HY004 | Invalid data type |
| HY007 | Associated statement is not prepared |
| HY008 | Operation canceled |
| HY009 | Invalid use of null pointer |
| HY010 | Function sequence error |
| HY011 | Attribute cannot be set now |
| HY012 | Invalid transaction operation code |
| HY013 | Memory management error |
| HY014 | Limit on the number of handles exceeded |
| HY015 | No cursor name available |
| HY016 | Cannot modify an implementation row descriptor |
| HY017 | Invalid use of an automatically allocated descriptor handle |
| HY018 | Server declined the cancellation request |
| HY019 | Non-string data cannot be sent in pieces |
| HY020 | Attempt to concatenate a null value |
| HY021 | Inconsistent descriptor information |
| HY024 | Invalid attribute value |
| HY055 | Non-string data cannot be used with string routine |
| HY090 | Invalid string length or buffer length |
| HY091 | Invalid descriptor field identifier |
| HY092 | Invalid attribute identifier |
| HY095 | Invalid Function ID specified |
| HY096 | Invalid information type |
| HY097 | Column type out of range |
| HY098 | Scope out of range |
| HY099 | Nullable type out of range |
| HY100 | Uniqueness option type out of range |
| HY101 | Accuracy option type out of range |
| HY103 | Invalid retrieval code |
| HY104 | Invalid Length/Precision value |
| HY105 | Invalid parameter type |
| HY106 | Invalid fetch orientation |
| HY107 | Row value out of range |
| HY109 | Invalid cursor position |
| HY110 | Invalid driver completion |
| HY111 | Invalid bookmark value |
| HYC00 | Optional feature not implemented |
| HYT00 | Timeout expired |
| HYT01 | Connection timeout expired |
| **SQLCLASS XX (Internal Error)** | |
| XX000 | Internal error |
| XX001 | Data corrupted |
| XX002 | Index corrupted |
SQLCODE and GDSCODE Error Codes and Descriptions
The following table lists the SQLCODE groupings, the numeric and symbolic values of the GDSCODE errors, and their message texts.
SQLCODE has been in use for many years and should now be considered deprecated. Support for SQLCODE is likely to be dropped in a future version.
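In PSQL, errors can be trapped by their GDSCODE symbol rather than by the deprecated SQLCODE. A minimal sketch, assuming a hypothetical table `CUSTOMERS` and a pre-created exception `E_CUSTOMER_IN_USE`:

```sql
-- Trap GDSCODE foreign_key (335544466, SQLCODE -530) and re-raise it
-- as a user-defined exception. CUSTOMERS and E_CUSTOMER_IN_USE are
-- hypothetical names used for illustration only.
EXECUTE BLOCK
AS
BEGIN
  DELETE FROM CUSTOMERS WHERE ID = 1;
  WHEN GDSCODE foreign_key DO
    EXCEPTION E_CUSTOMER_IN_USE;
END
```

The handler names the symbolic value from the table below (`foreign_key`), not the numeric code.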
| SQLCODE | GDSCODE | Symbol | Message Text |
|---|---|---|---|
| 501 | 335544802 | dialect_reset_warning | Database dialect being changed from 3 to 1 |
| 304 | 335545266 | truncate_warn | String truncated warning due to the following reason |
| 304 | 335545267 | truncate_monitor | Monitoring data does not fit into the field |
| 304 | 335545268 | truncate_context | Engine data does not fit into return value of system function |
| 301 | 335544808 | dtype_renamed | DATE data type is now called TIMESTAMP |
| 301 | 336003076 | dsql_dialect_warning_expr | Use of @1 expression that returns different results in dialect 1 and dialect 3 |
| 301 | 336003080 | dsql_warning_number_ambiguous | WARNING: Numeric literal @1 is interpreted as a floating-point |
| 301 | 336003081 | dsql_warning_number_ambiguous1 | value in SQL dialect 1, but as an exact numeric value in SQL dialect 3. |
| 301 | 336003082 | dsql_warn_precision_ambiguous | WARNING: NUMERIC and DECIMAL fields with precision 10 or greater are stored |
| 301 | 336003083 | dsql_warn_precision_ambiguous1 | as approximate floating-point values in SQL dialect 1, but as 64-bit |
| 301 | 336003084 | dsql_warn_precision_ambiguous2 | integers in SQL dialect 3. |
| 300 | 335544807 | sqlwarn | SQL warning code = @1 |
| 106 | 336068855 | dyn_miss_priv_warning | Warning: @1 on @2 is not granted to @3. |
| 101 | 335544366 | segment | segment buffer length shorter than expected |
| 100 | 335544338 | from_no_match | no match for first value expression |
| 100 | 335544354 | no_record | invalid database key |
| 100 | 335544367 | segstr_eof | attempted retrieval of more segments than exist |
| 0 | 335544875 | bad_debug_format | Bad debug info format |
| 0 | 335544931 | montabexh | Monitoring table space exhausted |
| 0 | 336068743 | dyn_dup_procedure | Procedure @1 already exists |
| 0 | 336068819 | dyn_virmemexh | unable to allocate memory from the operating system |
| 0 | 336068821 | del_gen_fail | ERASE RDB$GENERATORS failed |
| 0 | 336068842 | del_coll_fail | ERASE RDB$COLLATIONS failed |
| 0 | 336068860 | dyn_locksmith_use_granted | Only @1 or user with privilege USE_GRANTED_BY_CLAUSE can use GRANTED BY clause |
| 0 | 336068861 | dyn_dup_exception | Exception @1 already exists |
| 0 | 336068862 | dyn_dup_generator | Sequence @1 already exists |
| 0 | 336068876 | dyn_dup_function | Function @1 already exists |
| 0 | 336068899 | dyn_create_user_no_password | Password must be specified when creating user |
| 0 | 336068905 | dyn_concur_alter_database | Concurrent ALTER DATABASE is not supported |
| 0 | 336068906 | dyn_incompat_alter_database | Incompatible ALTER DATABASE clauses: '@1' and '@2' |
| -84 | 335544554 | nonsql_security_rel | object has non-SQL security class defined |
| -84 | 335544555 | nonsql_security_fld | column has non-SQL security class defined |
| -84 | 335544668 | dsql_procedure_use_err | procedure @1 does not return any values |
| -85 | 335544747 | usrname_too_long | The username entered is too long. Maximum length is 31 bytes. |
| -85 | 335544748 | password_too_long | The password specified is too long. Maximum length is 8 bytes. |
| -85 | 335544749 | usrname_required | A username is required for this operation. |
| -85 | 335544750 | password_required | A password is required for this operation |
| -85 | 335544751 | bad_protocol | The network protocol specified is invalid |
| -85 | 335544752 | dup_usrname_found | A duplicate user name was found in the security database |
| -85 | 335544753 | usrname_not_found | The user name specified was not found in the security database |
| -85 | 335544754 | error_adding_sec_record | An error occurred while attempting to add the user. |
| -85 | 335544755 | error_modifying_sec_record | An error occurred while attempting to modify the user record. |
| -85 | 335544756 | error_deleting_sec_record | An error occurred while attempting to delete the user record. |
| -85 | 335544757 | error_updating_sec_db | An error occurred while updating the security database. |
| -103 | 335544571 | dsql_constant_err | Data type for constant unknown |
| -104 | 335544343 | invalid_blr | invalid request BLR at offset @1 |
| -104 | 335544390 | syntaxerr | BLR syntax error: expected @1 at offset @2, encountered @3 |
| -104 | 335544425 | ctxinuse | context already in use (BLR error) |
| -104 | 335544426 | ctxnotdef | context not defined (BLR error) |
| -104 | 335544429 | badparnum | undefined parameter number |
| -104 | 335544440 | bad_msg_vec | |
| -104 | 335544456 | invalid_sdl | invalid slice description language at offset @1 |
| -104 | 335544570 | dsql_command_err | Invalid command |
| -104 | 335544579 | dsql_internal_err | Internal error |
| -104 | 335544590 | dsql_dup_option | Option specified more than once |
| -104 | 335544591 | dsql_tran_err | Unknown transaction option |
| -104 | 335544592 | dsql_invalid_array | Invalid array reference |
| -104 | 335544608 | command_end_err | Unexpected end of command |
| -104 | 335544612 | token_err | Token unknown |
| -104 | 335544634 | dsql_token_unk_err | Token unknown - line @1, column @2 |
| -104 | 335544709 | dsql_agg_ref_err | Invalid aggregate reference |
| -104 | 335544714 | invalid_array_id | invalid blob id |
| -104 | 335544730 | cse_not_supported | Client/Server Express not supported in this release |
| -104 | 335544743 | token_too_long | token size exceeds limit |
| -104 | 335544763 | invalid_string_constant | a string constant is delimited by double quotes |
| -104 | 335544764 | transitional_date | DATE must be changed to TIMESTAMP |
| -104 | 335544796 | sql_dialect_datatype_unsupport | Client SQL dialect @1 does not support reference to @2 datatype |
| -104 | 335544798 | depend_on_uncommitted_rel | You created an indirect dependency on uncommitted metadata. You must roll back the current transaction. |
| -104 | 335544821 | dsql_column_pos_err | Invalid column position used in the @1 clause |
| -104 | 335544822 | dsql_agg_where_err | Cannot use an aggregate or window function in a WHERE clause, use HAVING (for aggregate only) instead |
| -104 | 335544823 | dsql_agg_group_err | Cannot use an aggregate or window function in a GROUP BY clause |
| -104 | 335544824 | dsql_agg_column_err | Invalid expression in the @1 (not contained in either an aggregate function or the GROUP BY clause) |
| -104 | 335544825 | dsql_agg_having_err | Invalid expression in the @1 (neither an aggregate function nor a part of the GROUP BY clause) |
| -104 | 335544826 | dsql_agg_nested_err | Nested aggregate and window functions are not allowed |
| -104 | 335544849 | malformed_string | Malformed string |
| -104 | 335544851 | command_end_err2 | Unexpected end of command - line @1, column @2 |
| -104 | 335544930 | too_big_blr | BLR stream length @1 exceeds implementation limit @2 |
| -104 | 335544980 | internal_rejected_params | Incorrect parameters provided to internal function @1 |
| -104 | 335545022 | cannot_copy_stmt | Cannot copy statement @1 |
| -104 | 335545023 | invalid_boolean_usage | Invalid usage of boolean expression |
| -104 | 335545035 | svc_no_stdin | No isc_info_svc_stdin in user request, but service thread requested stdin data |
| -104 | 335545037 | svc_no_switches | All services except for getting server log require switches |
| -104 | 335545038 | svc_bad_size | Size of stdin data is more than was requested from client |
| -104 | 335545039 | no_crypt_plugin | Crypt plugin @1 failed to load |
| -104 | 335545040 | cp_name_too_long | Length of crypt plugin name should not exceed @1 bytes |
| -104 | 335545045 | null_spb | NULL data with non-zero SPB length |
| -104 | 335545116 | dsql_window_incompat_frames | If <window frame bound 1> specifies @1, then <window frame bound 2> shall not specify @2 |
| -104 | 335545117 | dsql_window_range_multi_key | RANGE based window with <expr> {PRECEDING \| FOLLOWING} cannot have ORDER BY with more than one value |
| -104 | 335545118 | dsql_window_range_inv_key_type | RANGE based window with <offset> PRECEDING/FOLLOWING must have a single ORDER BY key of numerical, date, time or timestamp types |
| -104 | 335545119 | dsql_window_frame_value_inv_type | Window RANGE/ROWS PRECEDING/FOLLOWING value must be of a numerical type |
| -104 | 335545205 | no_keyholder_plugin | Key holder plugin @1 failed to load |
| -104 | 336003075 | dsql_transitional_numeric | Precision 10 to 18 changed from DOUBLE PRECISION in SQL dialect 1 to 64-bit scaled integer in SQL dialect 3 |
| -104 | 336003077 | sql_db_dialect_dtype_unsupport | Database SQL dialect @1 does not support reference to @2 datatype |
| -104 | 336003087 | dsql_invalid_label | Label @1 @2 in the current scope |
| -104 | 336003088 | dsql_datatypes_not_comparable | Datatypes @1are not comparable in expression @2 |
| -104 | 336397215 | dsql_max_sort_items | cannot sort on more than 255 items |
| -104 | 336397216 | dsql_max_group_items | cannot group on more than 255 items |
| -104 | 336397217 | dsql_conflicting_sort_field | Cannot include the same field (@1.@2) twice in the ORDER BY clause with conflicting sorting options |
| -104 | 336397218 | dsql_derived_table_more_columns | column list from derived table @1 has more columns than the number of items in its SELECT statement |
| -104 | 336397219 | dsql_derived_table_less_columns | column list from derived table @1 has less columns than the number of items in its SELECT statement |
| -104 | 336397220 | dsql_derived_field_unnamed | no column name specified for column number @1 in derived table @2 |
| -104 | 336397221 | dsql_derived_field_dup_name | column @1 was specified multiple times for derived table @2 |
| -104 | 336397222 | dsql_derived_alias_select | Internal dsql error: alias type expected by pass1_expand_select_node |
| -104 | 336397223 | dsql_derived_alias_field | Internal dsql error: alias type expected by pass1_field |
| -104 | 336397224 | dsql_auto_field_bad_pos | Internal dsql error: column position out of range in pass1_union_auto_cast |
| -104 | 336397225 | dsql_cte_wrong_reference | Recursive CTE member (@1) can refer itself only in FROM clause |
| -104 | 336397226 | dsql_cte_cycle | CTE '@1' has cyclic dependencies |
| -104 | 336397227 | dsql_cte_outer_join | Recursive member of CTE can’t be member of an outer join |
| -104 | 336397228 | dsql_cte_mult_references | Recursive member of CTE can’t reference itself more than once |
| -104 | 336397229 | dsql_cte_not_a_union | Recursive CTE (@1) must be an UNION |
| -104 | 336397230 | dsql_cte_nonrecurs_after_recurs | CTE '@1' defined non-recursive member after recursive |
| -104 | 336397231 | dsql_cte_wrong_clause | Recursive member of CTE '@1' has @2 clause |
| -104 | 336397232 | dsql_cte_union_all | Recursive members of CTE (@1) must be linked with another members via UNION ALL |
| -104 | 336397233 | dsql_cte_miss_nonrecursive | Non-recursive member is missing in CTE '@1' |
| -104 | 336397234 | dsql_cte_nested_with | WITH clause can’t be nested |
| -104 | 336397235 | dsql_col_more_than_once_using | column @1 appears more than once in USING clause |
| -104 | 336397237 | dsql_cte_not_used | CTE "@1" is not used in query |
| -104 | 336397238 | dsql_col_more_than_once_view | column @1 appears more than once in ALTER VIEW |
| -104 | 336397257 | dsql_max_distinct_items | Cannot have more than 255 items in DISTINCT / UNION DISTINCT list |
| -104 | 336397321 | dsql_cte_recursive_aggregate | Recursive member of CTE cannot use aggregate or window function |
| -104 | 336397326 | dsql_wlock_simple | WITH LOCK can be used only with a single physical table |
| -104 | 336397327 | dsql_firstskip_rows | FIRST/SKIP cannot be used with OFFSET/FETCH or ROWS |
| -104 | 336397328 | dsql_wlock_aggregates | WITH LOCK cannot be used with aggregates |
| -104 | 336397329 | dsql_wlock_conflict | WITH LOCK cannot be used with @1 |
| -105 | 335544702 | escape_invalid | Invalid ESCAPE sequence |
| -105 | 335544789 | extract_input_mismatch | Specified EXTRACT part does not exist in input datatype |
| -105 | 335544884 | invalid_similar_pattern | Invalid SIMILAR TO pattern |
| -150 | 335544360 | read_only_rel | attempted update of read-only table |
| -150 | 335544362 | read_only_view | cannot update read-only view @1 |
| -150 | 335544446 | non_updatable | not updatable |
| -150 | 335544546 | constaint_on_view | Cannot define constraints on views |
| -151 | 335544359 | read_only_field | attempted update of read-only column @1 |
| -155 | 335544658 | dsql_base_table | @1 is not a valid base table of the specified view |
| -157 | 335544598 | specify_field_err | must specify column name for view select expression |
| -158 | 335544599 | num_field_err | number of columns does not match select list |
| -162 | 335544685 | no_dbkey | dbkey not available for multi-table views |
| -170 | 335544512 | prcmismat | Input parameter mismatch for procedure @1 |
| -170 | 335544619 | extern_func_err | External functions cannot have more than 10 parameters |
| -170 | 335544850 | prc_out_param_mismatch | Output parameter mismatch for procedure @1 |
| -170 | 335545101 | fun_param_mismatch | Input parameter mismatch for function @1 |
| -171 | 335544439 | funmismat | function @1 could not be matched |
| -171 | 335544458 | invalid_dimension | column not array or invalid dimensions (expected @1, encountered @2) |
| -171 | 335544618 | return_mode_err | Return mode by value not allowed for this data type |
| -171 | 335544873 | array_max_dimensions | Array data type can use up to @1 dimensions |
| -172 | 335544438 | funnotdef | function @1 is not defined |
| -172 | 335544932 | modnotfound | module name or entrypoint could not be found |
| -203 | 335544708 | dyn_fld_ambiguous | Ambiguous column reference. |
| -204 | 335544463 | gennotdef | generator @1 is not defined |
| -204 | 335544502 | stream_not_defined | reference to invalid stream number |
| -204 | 335544509 | charset_not_found | CHARACTER SET @1 is not defined |
| -204 | 335544511 | prcnotdef | procedure @1 is not defined |
| -204 | 335544515 | codnotdef | status code @1 unknown |
| -204 | 335544516 | xcpnotdef | exception @1 not defined |
| -204 | 335544532 | ref_cnstrnt_notfound | Name of Referential Constraint not defined in constraints table. |
| -204 | 335544551 | grant_obj_notfound | could not find object for GRANT |
| -204 | 335544568 | text_subtype | Implementation of text subtype @1 not located. |
| -204 | 335544573 | dsql_datatype_err | Data type unknown |
| -204 | 335544580 | dsql_relation_err | Table unknown |
| -204 | 335544581 | dsql_procedure_err | Procedure unknown |
| -204 | 335544588 | collation_not_found | COLLATION @1 for CHARACTER SET @2 is not defined |
| -204 | 335544589 | collation_not_for_charset | COLLATION @1 is not valid for specified CHARACTER SET |
| -204 | 335544595 | dsql_trigger_err | Trigger unknown |
| -204 | 335544620 | alias_conflict_err | alias @1 conflicts with an alias in the same statement |
| -204 | 335544621 | procedure_conflict_error | alias @1 conflicts with a procedure in the same statement |
| -204 | 335544622 | relation_conflict_err | alias @1 conflicts with a table in the same statement |
| -204 | 335544635 | dsql_no_relation_alias | there is no alias or table named @1 at this scope level |
| -204 | 335544636 | indexname | there is no index @1 for table @2 |
| -204 | 335544640 | collation_requires_text | Invalid use of CHARACTER SET or COLLATE |
| -204 | 335544662 | dsql_blob_type_unknown | BLOB SUB_TYPE @1 is not defined |
| -204 | 335544759 | bad_default_value | can not define a not null column with NULL as default value |
| -204 | 335544760 | invalid_clause | invalid clause --- '@1' |
| -204 | 335544800 | too_many_contexts | Too many Contexts of Relation/Procedure/Views. Maximum allowed is 256 |
| -204 | 335544817 | bad_limit_param | Invalid parameter to FETCH or FIRST. Only integers >= 0 are allowed. |
| -204 | 335544818 | bad_skip_param | Invalid parameter to OFFSET or SKIP. Only integers >= 0 are allowed. |
| -204 | 335544837 | bad_substring_offset | Invalid offset parameter @1 to SUBSTRING. Only positive integers are allowed. |
| -204 | 335544853 | bad_substring_length | Invalid length parameter @1 to SUBSTRING. Negative integers are not allowed. |
| -204 | 335544854 | charset_not_installed | CHARACTER SET @1 is not installed |
| -204 | 335544855 | collation_not_installed | COLLATION @1 for CHARACTER SET @2 is not installed |
| -204 | 335544867 | subtype_for_internal_use | Blob sub_types bigger than 1 (text) are for internal use only |
| -204 | 335545104 | invalid_attachment_charset | CHARACTER SET @1 cannot be used as a attachment character set |
| -204 | 336003085 | dsql_ambiguous_field_name | Ambiguous field name between @1 and @2 |
| -205 | 335544396 | fldnotdef | column @1 is not defined in table @2 |
| -205 | 335544552 | grant_fld_notfound | could not find column for GRANT |
| -205 | 335544883 | fldnotdef2 | column @1 is not defined in procedure @2 |
| -206 | 335544578 | dsql_field_err | Column unknown |
| -206 | 335544587 | dsql_blob_err | Column is not a BLOB |
| -206 | 335544596 | dsql_subselect_err | Subselect illegal in this context |
| -206 | 336397208 | dsql_line_col_error | At line @1, column @2 |
| -206 | 336397209 | dsql_unknown_pos | At unknown line and column |
| -206 | 336397210 | dsql_no_dup_name | Column @1 cannot be repeated in @2 statement |
| -208 | 335544617 | order_by_err | invalid ORDER BY clause |
| -219 | 335544395 | relnotdef | table @1 is not defined |
| -219 | 335544872 | domnotdef | domain @1 is not defined |
| -230 | 335544487 | walw_err | WAL Writer error |
| -231 | 335544488 | logh_small | Log file header of @1 too small |
| -232 | 335544489 | logh_inv_version | Invalid version of log file @1 |
| -233 | 335544490 | logh_open_flag | Log file @1 not latest in the chain but open flag still set |
| -234 | 335544491 | logh_open_flag2 | Log file @1 not closed properly; database recovery may be required |
| -235 | 335544492 | logh_diff_dbname | Database name in the log file @1 is different |
| -236 | 335544493 | logf_unexpected_eof | Unexpected end of log file @1 at offset @2 |
| -237 | 335544494 | logr_incomplete | Incomplete log record at offset @1 in log file @2 |
| -238 | 335544495 | logr_header_small | Log record header too small at offset @1 in log file @2 |
| -239 | 335544496 | logb_small | Log block too small at offset @1 in log file @2 |
| -239 | 335544691 | cache_too_small | Insufficient memory to allocate page buffer cache |
| -239 | 335544693 | log_too_small | Log size too small |
| -239 | 335544694 | partition_too_small | Log partition size too small |
| -240 | 335544497 | wal_illegal_attach | Illegal attempt to attach to an uninitialized WAL segment for @1 |
| -241 | 335544498 | wal_invalid_wpb | Invalid WAL parameter block option @1 |
| -242 | 335544499 | wal_err_rollover | Cannot roll over to the next log file @1 |
| -243 | 335544500 | no_wal | database does not use Write-ahead Log |
| -244 | 335544503 | wal_subsys_error | WAL subsystem encountered error |
| -245 | 335544504 | wal_subsys_corrupt | WAL subsystem corrupted |
| -246 | 335544513 | wal_bugcheck | Database @1: WAL subsystem bug for pid @2 @3 |
| -247 | 335544514 | wal_cant_expand | Could not expand the WAL segment for database @1 |
| -248 | 335544521 | wal_err_rollover2 | Unable to roll over please see Firebird log. |
| -249 | 335544522 | wal_err_logwrite | WAL I/O error. Please see Firebird log. |
| -250 | 335544523 | wal_err_jrn_comm | WAL writer - Journal server communication error. Please see Firebird log. |
| -251 | 335544524 | wal_err_expansion | WAL buffers cannot be increased. Please see Firebird log. |
| -252 | 335544525 | wal_err_setup | WAL setup error. Please see Firebird log. |
| -253 | 335544526 | wal_err_ww_sync | obsolete |
| -254 | 335544527 | wal_err_ww_start | Cannot start WAL writer for the database @1 |
| -255 | 335544556 | wal_cache_err | Write-ahead Log without shared cache configuration not allowed |
| -257 | 335544566 | start_cm_for_wal | WAL defined; Cache Manager must be started first |
| -258 | 335544567 | wal_ovflow_log_required | Overflow log specification required for round-robin log |
| -259 | 335544629 | wal_shadow_err | Write-ahead Log with shadowing configuration not allowed |
| -260 | 335544690 | cache_redef | Cache redefined |
| -260 | 335544692 | log_redef | Log redefined |
| -261 | 335544695 | partition_not_supp | Partitions not supported in series of log file specification |
| -261 | 335544696 | log_length_spec | Total length of a partitioned log must be specified |
| -281 | 335544637 | no_stream_plan | table or procedure @1 is not referenced in plan |
| -281 | 335545282 | wrong_proc_plan | Procedures cannot specify access type other than NATURAL in the plan |
| -282 | 335544638 | stream_twice | table or procedure @1 is referenced more than once in plan; use aliases to distinguish |
| -282 | 335544643 | dsql_self_join | the table @1 is referenced twice; use aliases to differentiate |
| -282 | 335544659 | duplicate_base_table | table or procedure @1 is referenced twice in view; use an alias to distinguish |
| -282 | 335544660 | view_alias | view @1 has more than one base table; use aliases to distinguish |
| -282 | 335544710 | complex_view | navigational stream @1 references a view with more than one base table |
| -283 | 335544639 | stream_not_found | table or procedure @1 is referenced in the plan but not the from list |
| -284 | 335544642 | index_unused | index @1 cannot be used in the specified plan |
| -291 | 335544531 | primary_key_notnull | Column used in a PRIMARY constraint must be NOT NULL. |
| -291 | 335545103 | domain_primary_key_notnull | Domain used in the PRIMARY KEY constraint of table @1 must be NOT NULL |
| -292 | 335544534 | ref_cnstrnt_update | Cannot update constraints (RDB$REF_CONSTRAINTS). |
| -293 | 335544535 | check_cnstrnt_update | Cannot update constraints (RDB$CHECK_CONSTRAINTS). |
| -294 | 335544536 | check_cnstrnt_del | Cannot delete CHECK constraint entry (RDB$CHECK_CONSTRAINTS) |
| -295 | 335544545 | rel_cnstrnt_update | Cannot update constraints (RDB$RELATION_CONSTRAINTS). |
| -296 | 335544547 | invld_cnstrnt_type | internal Firebird consistency check (invalid RDB$CONSTRAINT_TYPE) |
| -297 | 335544558 | check_constraint | Operation violates CHECK constraint @1 on view or table @2 |
| -313 | 335544669 | dsql_count_mismatch | count of column list and variable list do not match |
| -313 | 336003099 | upd_ins_doesnt_match_pk | UPDATE OR INSERT field list does not match primary key of table @1 |
| -313 | 336003100 | upd_ins_doesnt_match_matching | UPDATE OR INSERT field list does not match MATCHING clause |
| -313 | 336003111 | dsql_wrong_param_num | Wrong number of parameters (expected @1, got @2) |
| -313 | 336003113 | upd_ins_cannot_default | UPDATE OR INSERT value for field @1, part of the implicit or explicit MATCHING clause, cannot be DEFAULT |
| -314 | 335544565 | transliteration_failed | Cannot transliterate character between character sets |
| -315 | 336068815 | dyn_dtype_invalid | Cannot change datatype for column @1. Changing datatype is not supported for BLOB or ARRAY columns. |
| -383 | 336068814 | dyn_dependency_exists | Column @1 from table @2 is referenced in @3 |
| -401 | 335544647 | invalid_operator | invalid comparison operator for find operation |
| -402 | 335544368 | segstr_no_op | attempted invalid operation on a BLOB |
| -402 | 335544414 | blobnotsup | BLOB and array data types are not supported for @1 operation |
| -402 | 335544427 | datnotsup | data operation not supported |
| -402 | 335545262 | cannot_update_old_blob | cannot update old BLOB |
| -402 | 335545263 | cannot_read_new_blob | cannot read from new BLOB |
| -402 | 335545283 | invalid_blob_util_handle | Invalid RDB$BLOB_UTIL handle |
| -402 | 335545284 | bad_temp_blob_id | Invalid temporary BLOB ID |
| -406 | 335544457 | out_of_bounds | subscript out of bounds |
| -406 | 335545028 | ss_out_of_bounds | Subscript @1 out of bounds [@2, @3] |
| -407 | 335544435 | nullsegkey | null segment of UNIQUE KEY |
| -413 | 335544334 | convert_error | conversion error from string "@1" |
| -413 | 335544454 | nofilter | filter not found to convert type @1 to type @2 |
| -413 | 335544860 | blob_convert_error | Unsupported conversion to target type BLOB (subtype @1) |
| -413 | 335544861 | array_convert_error | Unsupported conversion to target type ARRAY |
| -501 | 335544577 | dsql_cursor_close_err | Attempt to reclose a closed cursor |
| -502 | 335544574 | dsql_decl_err | Invalid cursor declaration |
| -502 | 335544576 | dsql_cursor_open_err | Attempt to reopen an open cursor |
| -502 | 336003090 | dsql_cursor_redefined | Statement already has a cursor @1 assigned |
| -502 | 336003091 | dsql_cursor_not_found | Cursor @1 is not found in the current context |
| -502 | 336003092 | dsql_cursor_exists | Cursor @1 already exists in the current context |
| -502 | 336003093 | dsql_cursor_rel_ambiguous | Relation @1 is ambiguous in cursor @2 |
| -502 | 336003094 | dsql_cursor_rel_not_found | Relation @1 is not found in cursor @2 |
| -504 | 335544572 | dsql_cursor_err | Invalid cursor reference |
| -504 | 336003089 | dsql_cursor_invalid | Empty cursor name is not allowed |
| -504 | 336003095 | dsql_cursor_not_open | Cursor is not open |
| -508 | 335544348 | no_cur_rec | no current record for fetch operation |
| -510 | 335544575 | dsql_cursor_update_err | Cursor @1 is not updatable |
| -518 | 335544582 | dsql_request_err | Request unknown |
| -519 | 335544688 | dsql_open_cursor_request | The prepare statement identifies a prepare statement with an open cursor |
| -530 | 335544466 | foreign_key | violation of FOREIGN KEY constraint "@1" on table "@2" |
| -530 | 335544838 | foreign_key_target_doesnt_exist | Foreign key reference target does not exist |
| -530 | 335544839 | foreign_key_references_present | Foreign key references are present for the record |
| -531 | 335544597 | dsql_crdb_prepare_err | Cannot prepare a CREATE DATABASE/SCHEMA statement |
| -532 | 335544469 | trans_invalid | transaction marked invalid and cannot be committed |
| -532 | 335545002 | attachment_in_use | Attachment is in use |
| -532 | 335545003 | transaction_in_use | Transaction is in use |
| -532 | 335545017 | async_active | Asynchronous call is already running for this attachment |
| -551 | 335544352 | no_priv | no permission for @1 access to @2 @3 |
| -551 | 335544790 | insufficient_svc_privileges | Service @1 requires SYSDBA permissions. Reattach to the Service Manager using the SYSDBA account. |
| -551 | 335545033 | trunc_limits | expected length @1, actual @2 |
| -551 | 335545034 | info_access | Wrong info requested in isc_svc_query() for anonymous service |
| -551 | 335545036 | svc_start_failed | Start request for anonymous service is impossible |
| -551 | 335545254 | effective_user | Effective user is @1 |
| -552 | 335544550 | not_rel_owner | only the owner of a table may reassign ownership |
| -552 | 335544553 | grant_nopriv | user does not have GRANT privileges for operation |
| -552 | 335544707 | grant_nopriv_on_base | user does not have GRANT privileges on base table/view for operation |
| -552 | 335545058 | protect_ownership | Only the owner can change the ownership |
| -553 | 335544529 | existing_priv_mod | cannot modify an existing user privilege |
| -595 | 335544645 | stream_crack | the current position is on a crack |
| -596 | 335544374 | stream_eof | attempt to fetch past the last record in a record stream |
-596 |
335544644 |
stream_bof |
attempt to fetch before the first record in a record stream |
-596 |
335545092 |
cursor_not_positioned |
Cursor @1 is not positioned in a valid record |
-597 |
335544632 |
dsql_file_length_err |
Preceding file did not specify length, so @1 must include starting page number |
-598 |
335544633 |
dsql_shadow_number_err |
Shadow number must be a positive integer |
-599 |
335544607 |
node_err |
gen.c: node not supported |
-599 |
335544625 |
node_name_err |
A node name is not permitted in a secondary, shadow, cache or log file name |
-600 |
335544680 |
crrp_data_err |
sort error: corruption in data structure |
-601 |
335544646 |
db_or_file_exists |
database or file exists |
-604 |
335544593 |
dsql_max_arr_dim_exceeded |
Array declared with too many dimensions |
-604 |
335544594 |
dsql_arr_range_error |
Illegal array dimension range |
-605 |
335544682 |
dsql_field_ref |
Inappropriate self-reference of column |
-607 |
335544351 |
no_meta_update |
unsuccessful metadata update |
-607 |
335544549 |
systrig_update |
cannot modify or erase a system trigger |
-607 |
335544657 |
dsql_no_blob_array |
Array/BLOB/DATE data types not allowed in arithmetic |
-607 |
335544746 |
reftable_requires_pk |
"REFERENCES table" without "(column)" requires PRIMARY KEY on referenced table |
-607 |
335544815 |
generator_name |
GENERATOR @1 |
-607 |
335544816 |
udf_name |
Function @1 |
-607 |
335544858 |
must_have_phys_field |
Can’t have relation with only computed fields or constraints |
-607 |
336003074 |
dsql_dbkey_from_non_table |
Cannot SELECT RDB$DB_KEY from a stored procedure. |
-607 |
336003086 |
dsql_udf_return_pos_err |
External function should have return position between 1 and @1 |
-607 |
336003096 |
dsql_type_not_supp_ext_tab |
Data type @1 is not supported for EXTERNAL TABLES. Relation '@2', field '@3' |
-607 |
336003104 |
dsql_record_version_table |
To be used with RDB$RECORD_VERSION, @1 must be a table or a view of single table |
-607 |
336068845 |
dyn_cannot_del_syscoll |
Cannot delete system collation |
-607 |
336068866 |
dyn_cannot_mod_sysproc |
Cannot ALTER or DROP system procedure @1 |
-607 |
336068867 |
dyn_cannot_mod_systrig |
Cannot ALTER or DROP system trigger @1 |
-607 |
336068868 |
dyn_cannot_mod_sysfunc |
Cannot ALTER or DROP system function @1 |
-607 |
336068869 |
dyn_invalid_ddl_proc |
Invalid DDL statement for procedure @1 |
-607 |
336068870 |
dyn_invalid_ddl_trig |
Invalid DDL statement for trigger @1 |
-607 |
336068878 |
dyn_invalid_ddl_func |
Invalid DDL statement for function @1 |
-607 |
336397206 |
dsql_table_not_found |
Table @1 does not exist |
-607 |
336397207 |
dsql_view_not_found |
View @1 does not exist |
-607 |
336397212 |
dsql_no_array_computed |
Array and BLOB data types not allowed in computed field |
-607 |
336397214 |
dsql_only_can_subscript_array |
scalar operator used on field @1 which is not an array |
-612 |
336068812 |
dyn_domain_name_exists |
Cannot rename domain @1 to @2. A domain with that name already exists. |
-612 |
336068813 |
dyn_field_name_exists |
Cannot rename column @1 to @2. A column with that name already exists in table @3. |
-615 |
335544475 |
relation_lock |
lock on table @1 conflicts with existing lock |
-615 |
335544476 |
record_lock |
requested record lock conflicts with existing lock |
-615 |
335544501 |
drop_wal |
cannot drop log file when journaling is enabled |
-615 |
335544507 |
range_in_use |
refresh range number @1 already in use |
-616 |
335544530 |
primary_key_ref |
Cannot delete PRIMARY KEY being used in FOREIGN KEY definition. |
-616 |
335544539 |
integ_index_del |
Cannot delete index used by an Integrity Constraint |
-616 |
335544540 |
integ_index_mod |
Cannot modify index used by an Integrity Constraint |
-616 |
335544541 |
check_trig_del |
Cannot delete trigger used by a CHECK Constraint |
-616 |
335544543 |
cnstrnt_fld_del |
Cannot delete column being used in an Integrity Constraint. |
-616 |
335544630 |
dependency |
there are @1 dependencies |
-616 |
335544674 |
del_last_field |
last column in a table cannot be deleted |
-616 |
335544728 |
integ_index_deactivate |
Cannot deactivate index used by an integrity constraint |
-616 |
335544729 |
integ_deactivate_primary |
Cannot deactivate index used by a PRIMARY/UNIQUE constraint |
-617 |
335544542 |
check_trig_update |
Cannot update trigger used by a CHECK Constraint |
-617 |
335544544 |
cnstrnt_fld_rename |
Cannot rename column being used in an Integrity Constraint. |
-618 |
335544537 |
integ_index_seg_del |
Cannot delete index segment used by an Integrity Constraint |
-618 |
335544538 |
integ_index_seg_mod |
Cannot update index segment used by an Integrity Constraint |
-625 |
335544347 |
not_valid |
validation error for column @1, value "@2" |
-625 |
335544879 |
not_valid_for_var |
validation error for variable @1, value "@2" |
-625 |
335544880 |
not_valid_for |
validation error for @1, value "@2" |
-637 |
335544664 |
dsql_duplicate_spec |
duplicate specification of @1 - not supported |
-637 |
336397213 |
dsql_implicit_domain_name |
Implicit domain name @1 not allowed in user created domain |
-660 |
335544533 |
foreign_key_notfound |
Non-existent PRIMARY or UNIQUE KEY specified for FOREIGN KEY. |
-660 |
335544628 |
idx_create_err |
cannot create index @1 |
-660 |
336003098 |
primary_key_required |
Primary key required on table @1 |
-663 |
335544624 |
idx_seg_err |
segment count of 0 defined for index @1 |
-663 |
335544631 |
idx_key_err |
too many keys defined for index @1 |
-663 |
335544672 |
key_field_err |
too few key columns found for index @1 (incorrect column name?) |
-664 |
335544434 |
keytoobig |
key size exceeds implementation restriction for index "@1" |
-677 |
335544445 |
ext_err |
@1 extension error |
-685 |
335544465 |
bad_segstr_type |
invalid BLOB type for operation |
-685 |
335544670 |
blob_idx_err |
attempt to index BLOB column in index @1 |
-685 |
335544671 |
array_idx_err |
attempt to index array column in index @1 |
-689 |
335544403 |
badpagtyp |
page @1 is of wrong type (expected @2, found @3) |
-689 |
335544650 |
page_type_err |
wrong page type |
-690 |
335544679 |
no_segments_err |
segments not allowed in expression index @1 |
-691 |
335544681 |
rec_size_err |
new record size of @1 bytes is too big |
-692 |
335544477 |
max_idx |
maximum indexes per table (@1) exceeded |
-693 |
335544663 |
req_max_clones_exceeded |
Too many concurrent executions of the same request |
-694 |
335544684 |
no_field_access |
cannot access column @1 in view @2 |
-802 |
335544321 |
arith_except |
arithmetic exception, numeric overflow, or string truncation |
-802 |
335544836 |
concat_overflow |
Concatenation overflow. Resulting string cannot exceed 32765 bytes in length. |
-802 |
335544914 |
string_truncation |
string right truncation |
-802 |
335544915 |
blob_truncation |
blob truncation when converting to a string: length limit exceeded |
-802 |
335544916 |
numeric_out_of_range |
numeric value is out of range |
-802 |
336003105 |
dsql_invalid_sqlda_version |
SQLDA version expected between @1 and @2, found @3 |
-802 |
336003106 |
dsql_sqlvar_index |
at SQLVAR index @1 |
-802 |
336003107 |
dsql_no_sqlind |
empty pointer to NULL indicator variable |
-802 |
336003108 |
dsql_no_sqldata |
empty pointer to data |
-802 |
336003109 |
dsql_no_input_sqlda |
No SQLDA for input values provided |
-802 |
336003110 |
dsql_no_output_sqlda |
No SQLDA for output values provided |
-803 |
335544349 |
no_dup |
attempt to store duplicate value (visible to active transactions) in unique index "@1" |
-803 |
335544665 |
unique_key_violation |
violation of PRIMARY or UNIQUE KEY constraint "@1" on table "@2" |
-804 |
335544380 |
wronumarg |
wrong number of arguments on call |
-804 |
335544583 |
dsql_sqlda_err |
SQLDA error |
-804 |
335544584 |
dsql_var_count_err |
Count of read-write columns does not equal count of values |
-804 |
335544586 |
dsql_function_err |
Function unknown |
-804 |
335544713 |
dsql_sqlda_value_err |
Incorrect values within SQLDA structure |
-804 |
335545050 |
wrong_message_length |
Message length passed from user application does not match set of columns |
-804 |
335545051 |
no_output_format |
Resultset is missing output format information |
-804 |
335545052 |
item_finish |
Message metadata not ready - item @1 is not finished |
-804 |
335545100 |
interface_version_too_old |
Interface @3 version too old: expected @1, found @2 |
-804 |
336003097 |
dsql_feature_not_supported_ods |
Feature not supported on ODS version older than @1.@2 |
-804 |
336397205 |
dsql_too_old_ods |
ODS versions before ODS@1 are not supported |
-806 |
335544600 |
col_name_err |
Only simple column names permitted for VIEW WITH CHECK OPTION |
-807 |
335544601 |
where_err |
No WHERE clause for VIEW WITH CHECK OPTION |
-808 |
335544602 |
table_view_err |
Only one table allowed for VIEW WITH CHECK OPTION |
-809 |
335544603 |
distinct_err |
DISTINCT, GROUP or HAVING not permitted for VIEW WITH CHECK OPTION |
-810 |
335544605 |
subquery_err |
No subqueries permitted for VIEW WITH CHECK OPTION |
-811 |
335544652 |
sing_select_err |
multiple rows in singleton select |
-811 |
335545269 |
merge_dup_update |
Multiple source records cannot match the same target during MERGE |
-816 |
335544651 |
ext_readonly_err |
Cannot insert because the file is readonly or is on a read only medium. |
-816 |
335544715 |
extfile_uns_op |
Operation not supported for EXTERNAL FILE table @1 |
-817 |
335544361 |
read_only_trans |
attempted update during read-only transaction |
-817 |
335544371 |
segstr_no_write |
attempted write to read-only BLOB |
-817 |
335544444 |
read_only |
operation not supported |
-817 |
335544765 |
read_only_database |
attempted update on read-only database |
-817 |
335544766 |
must_be_dialect_2_and_up |
SQL dialect @1 is not supported in this database |
-817 |
335544793 |
ddl_not_allowed_by_db_sql_dial |
Metadata update statement is not allowed by the current database SQL dialect @1 |
-817 |
336003079 |
sql_dialect_conflict_num |
DB dialect @1 and client dialect @2 conflict with respect to numeric precision @3. |
-817 |
336003101 |
upd_ins_with_complex_view |
UPDATE OR INSERT without MATCHING could not be used with views based on more than one table |
-817 |
336003102 |
dsql_incompatible_trigger_type |
Incompatible trigger type |
-817 |
336003103 |
dsql_db_trigger_type_cant_change |
Database trigger type can’t be changed |
-817 |
336003112 |
dsql_invalid_drop_ss_clause |
Invalid DROP SQL SECURITY clause |
-820 |
335544356 |
obsolete_metadata |
metadata is obsolete |
-820 |
335544379 |
wrong_ods |
unsupported on-disk structure for file @1; found @2.@3, support @4.@5 |
-820 |
335544437 |
wrodynver |
wrong DYN version |
-820 |
335544467 |
high_minor |
minor version too high found @1 expected @2 |
-820 |
335544881 |
need_difference |
Difference file name should be set explicitly for database on raw device |
-823 |
335544473 |
invalid_bookmark |
invalid bookmark handle |
-824 |
335544474 |
bad_lock_level |
invalid lock level @1 |
-825 |
335544519 |
bad_lock_handle |
invalid lock handle |
-826 |
335544585 |
dsql_stmt_handle |
Invalid statement handle |
-827 |
335544655 |
invalid_direction |
invalid direction for find operation |
-827 |
335544718 |
invalid_key |
Invalid key for find operation |
-828 |
335544678 |
inval_key_posn |
invalid key position |
-829 |
335544616 |
field_ref_err |
invalid column reference |
-829 |
336068816 |
dyn_char_fld_too_small |
New size specified for column @1 must be at least @2 characters. |
-829 |
336068817 |
dyn_invalid_dtype_conversion |
Cannot change datatype for @1. Conversion from base type @2 to @3 is not supported. |
-829 |
336068818 |
dyn_dtype_conv_invalid |
Cannot change datatype for column @1 from a character type to a non-character type. |
-829 |
336068829 |
max_coll_per_charset |
Maximum number of collations per character set exceeded |
-829 |
336068830 |
invalid_coll_attr |
Invalid collation attributes |
-829 |
336068852 |
dyn_scale_too_big |
New scale specified for column @1 must be at most @2. |
-829 |
336068853 |
dyn_precision_too_small |
New precision specified for column @1 must be at least @2. |
-829 |
336068857 |
dyn_cannot_addrem_computed |
Cannot add or remove COMPUTED from column @1 |
-830 |
335544615 |
field_aggregate_err |
column used with aggregate |
-831 |
335544548 |
primary_key_exists |
Attempt to define a second PRIMARY KEY for the same table |
-832 |
335544604 |
key_field_count_err |
FOREIGN KEY column count does not match PRIMARY KEY |
-833 |
335544606 |
expression_eval_err |
expression evaluation not supported |
-833 |
335544810 |
date_range_exceeded |
value exceeds the range for valid dates |
-833 |
335544912 |
time_range_exceeded |
value exceeds the range for a valid time |
-833 |
335544913 |
datetime_range_exceeded |
value exceeds the range for valid timestamps |
-833 |
335544937 |
invalid_type_datetime_op |
Invalid data type in DATE/TIME/TIMESTAMP addition or subtraction in add_datettime() |
-833 |
335544938 |
onlycan_add_timetodate |
Only a TIME value can be added to a DATE value |
-833 |
335544939 |
onlycan_add_datetotime |
Only a DATE value can be added to a TIME value |
-833 |
335544940 |
onlycansub_tstampfromtstamp |
TIMESTAMP values can be subtracted only from another TIMESTAMP value |
-833 |
335544941 |
onlyoneop_mustbe_tstamp |
Only one operand can be of type TIMESTAMP |
-833 |
335544942 |
invalid_extractpart_time |
Only HOUR, MINUTE, SECOND and MILLISECOND can be extracted from TIME values |
-833 |
335544943 |
invalid_extractpart_date |
HOUR, MINUTE, SECOND and MILLISECOND cannot be extracted from DATE values |
-833 |
335544944 |
invalidarg_extract |
Invalid argument for EXTRACT() not being of DATE/TIME/TIMESTAMP type |
-833 |
335544945 |
sysf_argmustbe_exact |
Arguments for @1 must be integral types or NUMERIC/DECIMAL without scale |
-833 |
335544946 |
sysf_argmustbe_exact_or_fp |
First argument for @1 must be integral type or floating point type |
-833 |
335544947 |
sysf_argviolates_uuidtype |
Human readable UUID argument for @1 must be of string type |
-833 |
335544948 |
sysf_argviolates_uuidlen |
Human readable UUID argument for @2 must be of exact length @1 |
-833 |
335544949 |
sysf_argviolates_uuidfmt |
Human readable UUID argument for @3 must have "-" at position @2 instead of "@1" |
-833 |
335544950 |
sysf_argviolates_guidigits |
Human readable UUID argument for @3 must have hex digit at position @2 instead of "@1" |
-833 |
335544951 |
sysf_invalid_addpart_time |
Only HOUR, MINUTE, SECOND and MILLISECOND can be added to TIME values in @1 |
-833 |
335544952 |
sysf_invalid_add_datetime |
Invalid data type in addition of part to DATE/TIME/TIMESTAMP in @1 |
-833 |
335544953 |
sysf_invalid_addpart_dtime |
Invalid part @1 to be added to a DATE/TIME/TIMESTAMP value in @2 |
-833 |
335544954 |
sysf_invalid_add_dtime_rc |
Expected DATE/TIME/TIMESTAMP type in evlDateAdd() result |
-833 |
335544955 |
sysf_invalid_diff_dtime |
Expected DATE/TIME/TIMESTAMP type as first and second argument to @1 |
-833 |
335544956 |
sysf_invalid_timediff |
The result of TIME-<value> in @1 cannot be expressed in YEAR, MONTH, DAY or WEEK |
-833 |
335544957 |
sysf_invalid_tstamptimediff |
The result of TIME-TIMESTAMP or TIMESTAMP-TIME in @1 cannot be expressed in HOUR, MINUTE, SECOND or MILLISECOND |
-833 |
335544958 |
sysf_invalid_datetimediff |
The result of DATE-TIME or TIME-DATE in @1 cannot be expressed in HOUR, MINUTE, SECOND and MILLISECOND |
-833 |
335544959 |
sysf_invalid_diffpart |
Invalid part @1 to express the difference between two DATE/TIME/TIMESTAMP values in @2 |
-833 |
335544960 |
sysf_argmustbe_positive |
Argument for @1 must be positive |
-833 |
335544961 |
sysf_basemustbe_positive |
Base for @1 must be positive |
-833 |
335544962 |
sysf_argnmustbe_nonneg |
Argument #@1 for @2 must be zero or positive |
-833 |
335544963 |
sysf_argnmustbe_positive |
Argument #@1 for @2 must be positive |
-833 |
335544964 |
sysf_invalid_zeropowneg |
Base for @1 cannot be zero if exponent is negative |
-833 |
335544965 |
sysf_invalid_negpowfp |
Base for @1 cannot be negative if exponent is not an integral value |
-833 |
335544966 |
sysf_invalid_scale |
The numeric scale must be between -128 and 127 in @1 |
-833 |
335544967 |
sysf_argmustbe_nonneg |
Argument for @1 must be zero or positive |
-833 |
335544968 |
sysf_binuuid_mustbe_str |
Binary UUID argument for @1 must be of string type |
-833 |
335544969 |
sysf_binuuid_wrongsize |
Binary UUID argument for @2 must use @1 bytes |
-833 |
335544976 |
sysf_argmustbe_nonzero |
Argument for @1 must be different than zero |
-833 |
335544977 |
sysf_argmustbe_range_inc1_1 |
Argument for @1 must be in the range [-1, 1] |
-833 |
335544978 |
sysf_argmustbe_gteq_one |
Argument for @1 must be greater or equal than one |
-833 |
335544979 |
sysf_argmustbe_range_exc1_1 |
Argument for @1 must be in the range ]-1, 1[ |
-833 |
335544981 |
sysf_fp_overflow |
Floating point overflow in built-in function @1 |
-833 |
335545009 |
sysf_invalid_trig_namespace |
Invalid usage of context namespace DDL_TRIGGER |
-833 |
335545024 |
sysf_argscant_both_be_zero |
Arguments for @1 cannot both be zero |
-833 |
335545046 |
max_args_exceeded |
Maximum (@1) number of arguments exceeded for function @2 |
-833 |
335545120 |
window_frame_value_invalid |
Invalid PRECEDING or FOLLOWING offset in window function: cannot be negative |
-833 |
335545121 |
dsql_window_not_found |
Window @1 not found |
-833 |
335545122 |
dsql_window_cant_overr_part |
Cannot use PARTITION BY clause while overriding the window @1 |
-833 |
335545123 |
dsql_window_cant_overr_order |
Cannot use ORDER BY clause while overriding the window @1 which already has an ORDER BY clause |
-833 |
335545124 |
dsql_window_cant_overr_frame |
Cannot override the window @1 because it has a frame clause. Tip: it can be used without parenthesis in OVER |
-833 |
335545125 |
dsql_window_duplicate |
Duplicate window definition for @1 |
-833 |
335545156 |
sysf_invalid_first_last_part |
Invalid part @1 to calculate the @1 of a DATE/TIMESTAMP |
-833 |
335545157 |
sysf_invalid_date_timestamp |
Expected DATE/TIMESTAMP value in @1 |
-833 |
336397240 |
dsql_eval_unknode |
Unknown node type @1 in dsql/GEN_expr |
-833 |
336397241 |
dsql_agg_wrongarg |
Argument for @1 in dialect 1 must be string or numeric |
-833 |
336397242 |
dsql_agg2_wrongarg |
Argument for @1 in dialect 3 must be numeric |
-833 |
336397243 |
dsql_nodateortime_pm_string |
Strings cannot be added to or subtracted from DATE or TIME types |
-833 |
336397244 |
dsql_invalid_datetime_subtract |
Invalid data type for subtraction involving DATE, TIME or TIMESTAMP types |
-833 |
336397245 |
dsql_invalid_dateortime_add |
Adding two DATE values or two TIME values is not allowed |
-833 |
336397246 |
dsql_invalid_type_minus_date |
DATE value cannot be subtracted from the provided data type |
-833 |
336397247 |
dsql_nostring_addsub_dial3 |
Strings cannot be added or subtracted in dialect 3 |
-833 |
336397248 |
dsql_invalid_type_addsub_dial3 |
Invalid data type for addition or subtraction in dialect 3 |
-833 |
336397249 |
dsql_invalid_type_multip_dial1 |
Invalid data type for multiplication in dialect 1 |
-833 |
336397250 |
dsql_nostring_multip_dial3 |
Strings cannot be multiplied in dialect 3 |
-833 |
336397251 |
dsql_invalid_type_multip_dial3 |
Invalid data type for multiplication in dialect 3 |
-833 |
336397252 |
dsql_mustuse_numeric_div_dial1 |
Division in dialect 1 must be between numeric data types |
-833 |
336397253 |
dsql_nostring_div_dial3 |
Strings cannot be divided in dialect 3 |
-833 |
336397254 |
dsql_invalid_type_div_dial3 |
Invalid data type for division in dialect 3 |
-833 |
336397255 |
dsql_nostring_neg_dial3 |
Strings cannot be negated (applied the minus operator) in dialect 3 |
-833 |
336397256 |
dsql_invalid_type_neg |
Invalid data type for negation (minus operator) |
-834 |
335544508 |
range_not_found |
refresh range number @1 not found |
-835 |
335544649 |
bad_checksum |
bad checksum |
-836 |
335544517 |
except |
exception @1 |
-836 |
335544848 |
except2 |
exception @1 |
-836 |
335545016 |
formatted_exception |
@1 |
-837 |
335544518 |
cache_restart |
restart shared cache manager |
-838 |
335544560 |
shutwarn |
database @1 shutdown in @2 seconds |
-839 |
335544686 |
jrn_format_err |
journal file wrong format |
-840 |
335544687 |
jrn_file_full |
intermediate journal file full |
-841 |
335544677 |
version_err |
too many versions |
-842 |
335544697 |
precision_err |
Precision must be from 1 to 18 |
-842 |
335544698 |
scale_nogt |
Scale must be between zero and precision |
-842 |
335544699 |
expec_short |
Short integer expected |
-842 |
335544700 |
expec_long |
Long integer expected |
-842 |
335544701 |
expec_ushort |
Unsigned short integer expected |
-842 |
335544712 |
expec_positive |
Positive value expected |
-842 |
335545138 |
decprecision_err |
DecFloat precision must be 16 or 34 |
-842 |
335545158 |
precision_err2 |
Precision must be from @1 to @2 |
-901 |
335544322 |
bad_dbkey |
invalid database key |
-901 |
335544326 |
bad_dpb_form |
unrecognized database parameter block |
-901 |
335544327 |
bad_req_handle |
invalid request handle |
-901 |
335544328 |
bad_segstr_handle |
invalid BLOB handle |
-901 |
335544329 |
bad_segstr_id |
invalid BLOB ID |
-901 |
335544330 |
bad_tpb_content |
invalid parameter in transaction parameter block |
-901 |
335544331 |
bad_tpb_form |
invalid format for transaction parameter block |
-901 |
335544332 |
bad_trans_handle |
invalid transaction handle (expecting explicit transaction start) |
-901 |
335544337 |
excess_trans |
attempt to start more than @1 transactions |
-901 |
335544339 |
infinap |
information type inappropriate for object specified |
-901 |
335544340 |
infona |
no information of this type available for object specified |
-901 |
335544341 |
infunk |
unknown information item |
-901 |
335544342 |
integ_fail |
action cancelled by trigger (@1) to preserve data integrity |
-901 |
335544345 |
lock_conflict |
lock conflict on no wait transaction |
-901 |
335544350 |
no_finish |
program attempted to exit without finishing database |
-901 |
335544353 |
no_recon |
transaction is not in limbo |
-901 |
335544355 |
no_segstr_close |
BLOB was not closed |
-901 |
335544357 |
open_trans |
cannot disconnect database with open transactions (@1 active) |
-901 |
335544358 |
port_len |
message length error (encountered @1, expected @2) |
-901 |
335544363 |
req_no_trans |
no transaction for request |
-901 |
335544364 |
req_sync |
request synchronization error |
-901 |
335544365 |
req_wrong_db |
request referenced an unavailable database |
-901 |
335544369 |
segstr_no_read |
attempted read of a new, open BLOB |
-901 |
335544370 |
segstr_no_trans |
attempted action on BLOB outside transaction |
-901 |
335544372 |
segstr_wrong_db |
attempted reference to BLOB in unavailable database |
-901 |
335544376 |
unres_rel |
table @1 was omitted from the transaction reserving list |
-901 |
335544377 |
uns_ext |
request includes a DSRI extension not supported in this implementation |
-901 |
335544378 |
wish_list |
feature is not supported |
-901 |
335544382 |
random |
@1 |
-901 |
335544383 |
fatal_conflict |
unrecoverable conflict with limbo transaction @1 |
-901 |
335544392 |
bdbincon |
internal error |
-901 |
335544407 |
dbbnotzer |
database handle not zero |
-901 |
335544408 |
tranotzer |
transaction handle not zero |
-901 |
335544418 |
trainlim |
transaction in limbo |
-901 |
335544419 |
notinlim |
transaction not in limbo |
-901 |
335544420 |
traoutsta |
transaction outstanding |
-901 |
335544428 |
badmsgnum |
undefined message number |
-901 |
335544431 |
blocking_signal |
blocking signal has been received |
-901 |
335544442 |
noargacc_read |
database system cannot read argument @1 |
-901 |
335544443 |
noargacc_write |
database system cannot write argument @1 |
-901 |
335544450 |
misc_interpreted |
@1 |
-901 |
335544468 |
tra_state |
transaction @1 is @2 |
-901 |
335544485 |
bad_stmt_handle |
invalid statement handle |
-901 |
335544510 |
lock_timeout |
lock time-out on wait transaction |
-901 |
335544559 |
bad_svc_handle |
invalid service handle |
-901 |
335544561 |
wrospbver |
wrong version of service parameter block |
-901 |
335544562 |
bad_spb_form |
unrecognized service parameter block |
-901 |
335544563 |
svcnotdef |
service @1 is not defined |
-901 |
335544609 |
index_name |
INDEX @1 |
-901 |
335544610 |
exception_name |
EXCEPTION @1 |
-901 |
335544611 |
field_name |
COLUMN @1 |
-901 |
335544613 |
union_err |
union not supported |
-901 |
335544614 |
dsql_construct_err |
Unsupported DSQL construct |
-901 |
335544623 |
dsql_domain_err |
Illegal use of keyword VALUE |
-901 |
335544626 |
table_name |
TABLE @1 |
-901 |
335544627 |
proc_name |
PROCEDURE @1 |
-901 |
335544641 |
dsql_domain_not_found |
Specified domain or source column @1 does not exist |
-901 |
335544656 |
dsql_var_conflict |
variable @1 conflicts with parameter in same procedure |
-901 |
335544666 |
srvr_version_too_old |
server version too old to support all CREATE DATABASE options |
-901 |
335544673 |
no_delete |
cannot delete |
-901 |
335544675 |
sort_err |
sort error |
-901 |
335544703 |
svcnoexe |
service @1 does not have an associated executable |
-901 |
335544704 |
net_lookup_err |
Failed to locate host machine. |
-901 |
335544705 |
service_unknown |
Undefined service @1/@2. |
-901 |
335544706 |
host_unknown |
The specified name was not found in the hosts file or Domain Name Services. |
-901 |
335544711 |
unprepared_stmt |
Attempt to execute an unprepared dynamic SQL statement. |
-901 |
335544716 |
svc_in_use |
Service is currently busy: @1 |
-901 |
335544719 |
net_init_error |
Error initializing the network software. |
-901 |
335544720 |
loadlib_failure |
Unable to load required library @1. |
-901 |
335544731 |
tra_must_sweep |
|
-901 |
335544740 |
udf_exception |
A fatal exception occurred during the execution of a user defined function. |
-901 |
335544741 |
lost_db_connection |
connection lost to database |
-901 |
335544742 |
no_write_user_priv |
User cannot write to RDB$USER_PRIVILEGES |
-901 |
335544767 |
blob_filter_exception |
A fatal exception occurred during the execution of a blob filter. |
-901 |
335544768 |
exception_access_violation |
Access violation. The code attempted to access a virtual address without privilege to do so. |
-901 |
335544769 |
exception_datatype_missalignment |
Datatype misalignment. The attempted to read or write a value that was not stored on a memory boundary. |
-901 |
335544770 |
exception_array_bounds_exceeded |
Array bounds exceeded. The code attempted to access an array element that is out of bounds. |
-901 |
335544771 |
exception_float_denormal_operand |
Float denormal operand. One of the floating-point operands is too small to represent a standard float value. |
-901 |
335544772 |
exception_float_divide_by_zero |
Floating-point divide by zero. The code attempted to divide a floating-point value by zero. |
-901 |
335544773 |
exception_float_inexact_result |
Floating-point inexact result. The result of a floating-point operation cannot be represented as a decimal fraction. |
-901 |
335544774 |
exception_float_invalid_operand |
Floating-point invalid operand. An indeterminant error occurred during a floating-point operation. |
-901 |
335544775 |
exception_float_overflow |
Floating-point overflow. The exponent of a floating-point operation is greater than the magnitude allowed. |
-901 |
335544776 |
exception_float_stack_check |
Floating-point stack check. The stack overflowed or underflowed as the result of a floating-point operation. |
-901 |
335544777 |
exception_float_underflow |
Floating-point underflow. The exponent of a floating-point operation is less than the magnitude allowed. |
-901 |
335544778 |
exception_integer_divide_by_zero |
Integer divide by zero. The code attempted to divide an integer value by an integer divisor of zero. |
-901 |
335544779 |
exception_integer_overflow |
Integer overflow. The result of an integer operation caused the most significant bit of the result to carry. |
-901 |
335544780 |
exception_unknown |
An exception occurred that does not have a description. Exception number @1. |
-901 |
335544781 |
exception_stack_overflow |
Stack overflow. The resource requirements of the runtime stack have exceeded the memory available to it. |
-901 |
335544782 |
exception_sigsegv |
Segmentation Fault. The code attempted to access memory without privileges. |
-901 |
335544783 |
exception_sigill |
Illegal Instruction. The Code attempted to perform an illegal operation. |
-901 |
335544784 |
exception_sigbus |
Bus Error. The Code caused a system bus error. |
-901 |
335544785 |
exception_sigfpe |
Floating Point Error. The Code caused an Arithmetic Exception or a floating point exception. |
-901 |
335544786 |
ext_file_delete |
Cannot delete rows from external files. |
-901 |
335544787 |
ext_file_modify |
Cannot update rows in external files. |
-901 |
335544788 |
adm_task_denied |
Unable to perform operation |
-901 |
335544794 |
cancelled |
operation was cancelled |
-901 |
335544797 |
svcnouser |
user name and password are required while attaching to the services manager |
-901 |
335544801 |
datype_notsup |
data type not supported for arithmetic |
-901 |
335544803 |
dialect_not_changed |
Database dialect not changed. |
-901 |
335544804 |
database_create_failed |
Unable to create database @1 |
-901 |
335544805 |
inv_dialect_specified |
Database dialect @1 is not a valid dialect. |
-901 |
335544806 |
valid_db_dialects |
Valid database dialects are @1. |
-901 |
335544811 |
inv_client_dialect_specified |
passed client dialect @1 is not a valid dialect. |
-901 |
335544812 |
valid_client_dialects |
Valid client dialects are @1. |
-901 |
335544814 |
service_not_supported |
Services functionality will be supported in a later version of the product |
-901 |
335544820 |
invalid_savepoint |
Unable to find savepoint with name @1 in transaction context |
-901 |
335544835 |
bad_shutdown_mode |
Target shutdown mode is invalid for database "@1" |
-901 |
335544840 |
no_update |
cannot update |
-901 |
335544842 |
stack_trace |
@1 |
-901 |
335544843 |
ctx_var_not_found |
Context variable '@1' is not found in namespace '@2' |
-901 |
335544844 |
ctx_namespace_invalid |
Invalid namespace name '@1' passed to @2 |
-901 |
335544845 |
ctx_too_big |
Too many context variables |
-901 |
335544846 |
ctx_bad_argument |
Invalid argument passed to @1 |
-901 |
335544847 |
identifier_too_long |
BLR syntax error. Identifier @1… is too long |
-901 |
335544859 |
invalid_time_precision |
Time precision exceeds allowed range (0-@1) |
-901 |
335544866 |
met_wrong_gtt_scope |
@1 cannot depend on @2 |
-901 |
335544868 |
illegal_prc_type |
Procedure @1 is not selectable (it does not contain a SUSPEND statement) |
-901 |
335544869 |
invalid_sort_datatype |
Datatype @1 is not supported for sorting operation |
-901 |
335544870 |
collation_name |
COLLATION @1 |
-901 |
335544871 |
domain_name |
DOMAIN @1 |
-901 |
335544874 |
max_db_per_trans_allowed |
A multi database transaction cannot span more than @1 databases |
-901 |
335544876 |
bad_proc_BLR |
Error while parsing procedure @1’s BLR |
-901 |
335544877 |
key_too_big |
index key too big |
-901 |
335544885 |
bad_teb_form |
Invalid TEB format |
-901 |
335544886 |
tpb_multiple_txn_isolation |
Found more than one transaction isolation in TPB |
-901 |
335544887 |
tpb_reserv_before_table |
Table reservation lock type @1 requires table name before in TPB |
-901 |
335544888 |
tpb_multiple_spec |
Found more than one @1 specification in TPB |
-901 |
335544889 |
tpb_option_without_rc |
Option @1 requires READ COMMITTED isolation in TPB |
-901 |
335544890 |
tpb_conflicting_options |
Option @1 is not valid if @2 was used previously in TPB |
-901 |
335544891 |
tpb_reserv_missing_tlen |
Table name length missing after table reservation @1 in TPB |
-901 |
335544892 |
tpb_reserv_long_tlen |
Table name length @1 is too long after table reservation @2 in TPB |
-901 |
335544893 |
tpb_reserv_missing_tname |
Table name length @1 without table name after table reservation @2 in TPB |
-901 |
335544894 |
tpb_reserv_corrup_tlen |
Table name length @1 goes beyond the remaining TPB size after table reservation @2 |
-901 |
335544895 |
tpb_reserv_null_tlen |
Table name length is zero after table reservation @1 in TPB |
-901 |
335544896 |
tpb_reserv_relnotfound |
Table or view @1 not defined in system tables after table reservation @2 in TPB |
-901 |
335544897 |
tpb_reserv_baserelnotfound |
Base table or view @1 for view @2 not defined in system tables after table reservation @3 in TPB |
-901 |
335544898 |
tpb_missing_len |
Option length missing after option @1 in TPB |
-901 |
335544899 |
tpb_missing_value |
Option length @1 without value after option @2 in TPB |
-901 |
335544900 |
tpb_corrupt_len |
Option length @1 goes beyond the remaining TPB size after option @2 |
-901 |
335544901 |
tpb_null_len |
Option length is zero after table reservation @1 in TPB |
-901 |
335544902 |
tpb_overflow_len |
Option length @1 exceeds the range for option @2 in TPB |
-901 |
335544903 |
tpb_invalid_value |
Option value @1 is invalid for the option @2 in TPB |
-901 |
335544904 |
tpb_reserv_stronger_wng |
Preserving previous table reservation @1 for table @2, stronger than new @3 in TPB |
-901 |
335544905 |
tpb_reserv_stronger |
Table reservation @1 for table @2 already specified and is stronger than new @3 in TPB |
-901 |
335544906 |
tpb_reserv_max_recursion |
Table reservation reached maximum recursion of @1 when expanding views in TPB |
-901 |
335544907 |
tpb_reserv_virtualtbl |
Table reservation in TPB cannot be applied to @1 because it’s a virtual table |
-901 |
335544908 |
tpb_reserv_systbl |
Table reservation in TPB cannot be applied to @1 because it’s a system table |
-901 |
335544909 |
tpb_reserv_temptbl |
Table reservation @1 or @2 in TPB cannot be applied to @3 because it’s a temporary table |
-901 |
335544910 |
tpb_readtxn_after_writelock |
Cannot set the transaction in read only mode after a table reservation isc_tpb_lock_write in TPB |
-901 |
335544911 |
tpb_writelock_after_readtxn |
Cannot take a table reservation isc_tpb_lock_write in TPB because the transaction is in read only mode |
-901 |
335544917 |
shutdown_timeout |
Firebird shutdown is still in progress after the specified timeout |
-901 |
335544918 |
att_handle_busy |
Attachment handle is busy |
-901 |
335544919 |
bad_udf_freeit |
Bad written UDF detected: pointer returned in FREE_IT function was not allocated by ib_util_malloc |
-901 |
335544920 |
eds_provider_not_found |
External Data Source provider '@1' not found |
-901 |
335544921 |
eds_connection |
Execute statement error at @1 : @2Data source : @3 |
-901 |
335544922 |
eds_preprocess |
Execute statement preprocess SQL error |
-901 |
335544923 |
eds_stmt_expected |
Statement expected |
-901 |
335544924 |
eds_prm_name_expected |
Parameter name expected |
-901 |
335544925 |
eds_unclosed_comment |
Unclosed comment found near '@1' |
-901 |
335544926 |
eds_statement |
Execute statement error at @1 : @2Statement : @3 Data source : @4 |
-901 |
335544927 |
eds_input_prm_mismatch |
Input parameters mismatch |
-901 |
335544928 |
eds_output_prm_mismatch |
Output parameters mismatch |
-901 |
335544929 |
eds_input_prm_not_set |
Input parameter '@1' have no value set |
-901 |
335544933 |
nothing_to_cancel |
nothing to cancel |
-901 |
335544934 |
ibutil_not_loaded |
ib_util library has not been loaded to deallocate memory returned by FREE_IT function |
-901 |
335544973 |
bad_epb_form |
Unrecognized events block |
-901 |
335544982 |
udf_fp_overflow |
Floating point overflow in result from UDF @1 |
-901 |
335544983 |
udf_fp_nan |
Invalid floating point value returned by UDF @1 |
-901 |
335544985 |
out_of_temp_space |
No free space found in temporary directories |
-901 |
335544986 |
eds_expl_tran_ctrl |
Explicit transaction control is not allowed |
-901 |
335544988 |
package_name |
PACKAGE @1 |
-901 |
335544989 |
cannot_make_not_null |
Cannot make field @1 of table @2 NOT NULL because there are NULLs present |
-901 |
335544990 |
feature_removed |
Feature @1 is not supported anymore |
-901 |
335544991 |
view_name |
VIEW @1 |
-901 |
335544993 |
invalid_fetch_option |
Fetch option @1 is invalid for a non-scrollable cursor |
-901 |
335544994 |
bad_fun_BLR |
Error while parsing function @1’s BLR |
-901 |
335544995 |
func_pack_not_implemented |
Cannot execute function @1 of the unimplemented package @2 |
-901 |
335544996 |
proc_pack_not_implemented |
Cannot execute procedure @1 of the unimplemented package @2 |
-901 |
335544997 |
eem_func_not_returned |
External function @1 not returned by the external engine plugin @2 |
-901 |
335544998 |
eem_proc_not_returned |
External procedure @1 not returned by the external engine plugin @2 |
-901 |
335544999 |
eem_trig_not_returned |
External trigger @1 not returned by the external engine plugin @2 |
-901 |
335545000 |
eem_bad_plugin_ver |
Incompatible plugin version @1 for external engine @2 |
-901 |
335545001 |
eem_engine_notfound |
External engine @1 not found |
-901 |
335545004 |
pman_cannot_load_plugin |
Error loading plugin @1 |
-901 |
335545005 |
pman_module_notfound |
Loadable module @1 not found |
-901 |
335545006 |
pman_entrypoint_notfound |
Standard plugin entrypoint does not exist in module @1 |
-901 |
335545007 |
pman_module_bad |
Module @1 exists but can not be loaded |
-901 |
335545008 |
pman_plugin_notfound |
Module @1 does not contain plugin @2 type @3 |
-901 |
335545010 |
unexpected_null |
Value is NULL but isNull parameter was not informed |
-901 |
335545011 |
type_notcompat_blob |
Type @1 is incompatible with BLOB |
-901 |
335545012 |
invalid_date_val |
Invalid date |
-901 |
335545013 |
invalid_time_val |
Invalid time |
-901 |
335545014 |
invalid_timestamp_val |
Invalid timestamp |
-901 |
335545015 |
invalid_index_val |
Invalid index @1 in function @2 |
-901 |
335545018 |
private_function |
Function @1 is private to package @2 |
-901 |
335545019 |
private_procedure |
Procedure @1 is private to package @2 |
-901 |
335545021 |
bad_events_handle |
invalid events id (handle) |
-901 |
335545025 |
spb_no_id |
missing service ID in spb |
-901 |
335545026 |
ee_blr_mismatch_null |
External BLR message mismatch: invalid null descriptor at field @1 |
-901 |
335545027 |
ee_blr_mismatch_length |
External BLR message mismatch: length = @1, expected @2 |
-901 |
335545031 |
libtommath_generic |
Libtommath error code @1 in function @2 |
-901 |
335545041 |
cp_process_active |
Crypt failed - already crypting database |
-901 |
335545042 |
cp_already_crypted |
Crypt failed - database is already in requested state |
-901 |
335545047 |
ee_blr_mismatch_names_count |
External BLR message mismatch: names count = @1, blr count = @2 |
-901 |
335545048 |
ee_blr_mismatch_name_not_found |
External BLR message mismatch: name @1 not found |
-901 |
335545049 |
bad_result_set |
Invalid resultset interface |
-901 |
335545059 |
badvarnum |
undefined variable number |
-901 |
335545071 |
info_unprepared_stmt |
Attempt to get information about an unprepared dynamic SQL statement. |
-901 |
335545072 |
idx_key_value |
Problematic key value is @1 |
-901 |
335545073 |
forupdate_virtualtbl |
Cannot select virtual table @1 for update WITH LOCK |
-901 |
335545074 |
forupdate_systbl |
Cannot select system table @1 for update WITH LOCK |
-901 |
335545075 |
forupdate_temptbl |
Cannot select temporary table @1 for update WITH LOCK |
-901 |
335545076 |
cant_modify_sysobj |
System @1 @2 cannot be modified |
-901 |
335545077 |
server_misconfigured |
Server misconfigured - contact administrator please |
-901 |
335545078 |
alter_role |
Deprecated backward compatibility ALTER ROLE … SET/DROP AUTO ADMIN mapping may be used only for RDB$ADMIN role |
-901 |
335545079 |
map_already_exists |
Mapping @1 already exists |
-901 |
335545080 |
map_not_exists |
Mapping @1 does not exist |
-901 |
335545081 |
map_load |
@1 failed when loading mapping cache |
-901 |
335545082 |
map_aster |
Invalid name <*> in authentication block |
-901 |
335545083 |
map_multi |
Multiple maps found for @1 |
-901 |
335545084 |
map_undefined |
Undefined mapping result - more than one different results found |
-901 |
335545088 |
map_nodb |
Global mapping is not available when database @1 is not present |
-901 |
335545089 |
map_notable |
Global mapping is not available when table RDB$MAP is not present in database @1 |
-901 |
335545090 |
miss_trusted_role |
Your attachment has no trusted role |
-901 |
335545091 |
set_invalid_role |
Role @1 is invalid or unavailable |
-901 |
335545093 |
dup_attribute |
Duplicated user attribute @1 |
-901 |
335545094 |
dyn_no_priv |
There is no privilege for this operation |
-901 |
335545095 |
dsql_cant_grant_option |
Using GRANT OPTION on @1 not allowed |
-901 |
335545097 |
crdb_load |
@1 failed when working with CREATE DATABASE grants |
-901 |
335545098 |
crdb_nodb |
CREATE DATABASE grants check is not possible when database @1 is not present |
-901 |
335545099 |
crdb_notable |
CREATE DATABASE grants check is not possible when table RDB$DB_CREATORS is not present in database @1 |
-901 |
335545102 |
savepoint_backout_err |
Error during savepoint backout - transaction invalidated |
-901 |
335545105 |
map_down |
Some database(s) were shutdown when trying to read mapping data |
-901 |
335545109 |
encrypt_error |
Page requires encryption but crypt plugin is missing |
-901 |
335545111 |
wrong_prvlg |
System privilege @1 does not exist |
-901 |
335545115 |
no_cursor |
Cannot open cursor for non-SELECT statement |
-901 |
335545127 |
cfg_stmt_timeout |
Config level timeout expired. |
-901 |
335545128 |
att_stmt_timeout |
Attachment level timeout expired. |
-901 |
335545129 |
req_stmt_timeout |
Statement level timeout expired. |
-901 |
335545139 |
decfloat_divide_by_zero |
Decimal float divide by zero. The code attempted to divide a DECFLOAT value by zero. |
-901 |
335545140 |
decfloat_inexact_result |
Decimal float inexact result. The result of an operation cannot be represented as a decimal fraction. |
-901 |
335545141 |
decfloat_invalid_operation |
Decimal float invalid operation. An indeterminant error occurred during an operation. |
-901 |
335545142 |
decfloat_overflow |
Decimal float overflow. The exponent of a result is greater than the magnitude allowed. |
-901 |
335545143 |
decfloat_underflow |
Decimal float underflow. The exponent of a result is less than the magnitude allowed. |
-901 |
335545144 |
subfunc_notdef |
Sub-function @1 has not been defined |
-901 |
335545145 |
subproc_notdef |
Sub-procedure @1 has not been defined |
-901 |
335545146 |
subfunc_signat |
Sub-function @1 has a signature mismatch with its forward declaration |
-901 |
335545147 |
subproc_signat |
Sub-procedure @1 has a signature mismatch with its forward declaration |
-901 |
335545148 |
subfunc_defvaldecl |
Default values for parameters are not allowed in definition of the previously declared sub-function @1 |
-901 |
335545149 |
subproc_defvaldecl |
Default values for parameters are not allowed in definition of the previously declared sub-procedure @1 |
-901 |
335545150 |
subfunc_not_impl |
Sub-function @1 was declared but not implemented |
-901 |
335545151 |
subproc_not_impl |
Sub-procedure @1 was declared but not implemented |
-901 |
335545152 |
sysf_invalid_hash_algorithm |
Invalid HASH algorithm @1 |
-901 |
335545153 |
expression_eval_index |
Expression evaluation error for index "@1" on table "@2" |
-901 |
335545154 |
invalid_decfloat_trap |
Invalid decfloat trap state @1 |
-901 |
335545155 |
invalid_decfloat_round |
Invalid decfloat rounding mode @1 |
-901 |
335545159 |
bad_batch_handle |
invalid batch handle |
-901 |
335545160 |
intl_char |
Bad international character in tag @1 |
-901 |
335545161 |
null_block |
Null data in parameters block with non-zero length |
-901 |
335545162 |
mixed_info |
Items working with running service and getting generic server information should not be mixed in single info block |
-901 |
335545163 |
unknown_info |
Unknown information item, code @1 |
-901 |
335545164 |
bpb_version |
Wrong version of blob parameters block @1, should be @2 |
-901 |
335545165 |
user_manager |
User management plugin is missing or failed to load |
-901 |
335545168 |
metadata_name |
Name @1 not found in system MetadataBuilder |
-901 |
335545169 |
tokens_parse |
Parse to tokens error |
-901 |
335545171 |
batch_compl_range |
Message @1 is out of range, only @2 messages in batch |
-901 |
335545172 |
batch_compl_detail |
Detailed error info for message @1 is missing in batch |
-901 |
335545175 |
big_segment |
Segment size (@1) should not exceed 65535 (64K - 1) when using segmented blob |
-901 |
335545176 |
batch_policy |
Invalid blob policy in the batch for @1() call |
-901 |
335545177 |
batch_defbpb |
Can’t change default BPB after adding any data to batch |
-901 |
335545178 |
batch_align |
Unexpected info buffer structure querying for server batch parameters |
-901 |
335545179 |
multi_segment_dup |
Duplicated segment @1 in multisegment connect block parameter |
-901 |
335545181 |
message_format |
Error parsing message format |
-901 |
335545182 |
batch_param_version |
Wrong version of batch parameters block @1, should be @2 |
-901 |
335545183 |
batch_msg_long |
Message size (@1) in batch exceeds internal buffer size (@2) |
-901 |
335545184 |
batch_open |
Batch already opened for this statement |
-901 |
335545185 |
batch_type |
Invalid type of statement used in batch |
-901 |
335545186 |
batch_param |
Statement used in batch must have parameters |
-901 |
335545187 |
batch_blobs |
There are no blobs in associated with batch statement |
-901 |
335545188 |
batch_blob_append |
appendBlobData() is used to append data to last blob but no such blob was added to the batch |
-901 |
335545189 |
batch_stream_align |
Portions of data, passed as blob stream, should have size multiple to the alignment required for blobs |
-901 |
335545190 |
batch_rpt_blob |
Repeated blob id @1 in registerBlob() |
-901 |
335545191 |
batch_blob_buf |
Blob buffer format error |
-901 |
335545192 |
batch_small_data |
Unusable (too small) data remained in @1 buffer |
-901 |
335545193 |
batch_cont_bpb |
Blob continuation should not contain BPB |
-901 |
335545194 |
batch_big_bpb |
Size of BPB (@1) greater than remaining data (@2) |
-901 |
335545195 |
batch_big_segment |
Size of segment (@1) greater than current BLOB data (@2) |
-901 |
335545196 |
batch_big_seg2 |
Size of segment (@1) greater than available data (@2) |
-901 |
335545197 |
batch_blob_id |
Unknown blob ID @1 in the batch message |
-901 |
335545198 |
batch_too_big |
Internal buffer overflow - batch too big |
-901 |
335545199 |
num_literal |
Numeric literal too long |
-901 |
335545202 |
hdr_overflow |
Header page overflow - too many clumplets on it |
-901 |
335545203 |
vld_plugins |
No matching client/server authentication plugins configured for execute statement in embedded datasource |
-901 |
335545206 |
ses_reset_err |
Cannot reset user session |
-901 |
335545207 |
ses_reset_open_trans |
There are open transactions (@1 active) |
-901 |
335545208 |
ses_reset_warn |
Session was reset with warning(s) |
-901 |
335545209 |
ses_reset_tran_rollback |
Transaction is rolled back due to session reset, all changes are lost |
-901 |
335545210 |
plugin_name |
Plugin @1: |
-901 |
335545211 |
parameter_name |
PARAMETER @1 |
-901 |
335545212 |
file_starting_page_err |
Starting page number for file @1 must be @2 or greater |
-901 |
335545213 |
invalid_timezone_offset |
Invalid time zone offset: @1 - must use format +/-hours:minutes and be between -14:00 and +14:00 |
-901 |
335545214 |
invalid_timezone_region |
Invalid time zone region: @1 |
-901 |
335545215 |
invalid_timezone_id |
Invalid time zone ID: @1 |
-901 |
335545216 |
tom_decode64len |
Wrong base64 text length @1, should be multiple of 4 |
-901 |
335545217 |
tom_strblob |
Invalid first parameter datatype - need string or blob |
-901 |
335545218 |
tom_reg |
Error registering @1 - probably bad tomcrypt library |
-901 |
335545219 |
tom_algorithm |
Unknown crypt algorithm @1 in USING clause |
-901 |
335545220 |
tom_mode_miss |
Should specify mode parameter for symmetric cipher |
-901 |
335545221 |
tom_mode_bad |
Unknown symmetric crypt mode specified |
-901 |
335545222 |
tom_no_mode |
Mode parameter makes no sense for chosen cipher |
-901 |
335545223 |
tom_iv_miss |
Should specify initialization vector (IV) for chosen cipher and/or mode |
-901 |
335545224 |
tom_no_iv |
Initialization vector (IV) makes no sense for chosen cipher and/or mode |
-901 |
335545225 |
tom_ctrtype_bad |
Invalid counter endianess @1 |
-901 |
335545226 |
tom_no_ctrtype |
Counter endianess parameter is not used in mode @1 |
-901 |
335545227 |
tom_ctr_big |
Too big counter value @1, maximum @2 can be used |
-901 |
335545228 |
tom_no_ctr |
Counter length/value parameter is not used with @1 @2 |
-901 |
335545229 |
tom_iv_length |
Invalid initialization vector (IV) length @1, need @2 |
-901 |
335545230 |
tom_error |
TomCrypt library error: @1 |
-901 |
335545231 |
tom_yarrow_start |
Starting PRNG yarrow |
-901 |
335545232 |
tom_yarrow_setup |
Setting up PRNG yarrow |
-901 |
335545233 |
tom_init_mode |
Initializing @1 mode |
-901 |
335545234 |
tom_crypt_mode |
Encrypting in @1 mode |
-901 |
335545235 |
tom_decrypt_mode |
Decrypting in @1 mode |
-901 |
335545236 |
tom_init_cip |
Initializing cipher @1 |
-901 |
335545237 |
tom_crypt_cip |
Encrypting using cipher @1 |
-901 |
335545238 |
tom_decrypt_cip |
Decrypting using cipher @1 |
-901 |
335545239 |
tom_setup_cip |
Setting initialization vector (IV) for @1 |
-901 |
335545240 |
tom_setup_chacha |
Invalid initialization vector (IV) length @1, need 8 or 12 |
-901 |
335545241 |
tom_encode |
Encoding @1 |
-901 |
335545242 |
tom_decode |
Decoding @1 |
-901 |
335545243 |
tom_rsa_import |
Importing RSA key |
-901 |
335545244 |
tom_oaep |
Invalid OAEP packet |
-901 |
335545245 |
tom_hash_bad |
Unknown hash algorithm @1 |
-901 |
335545246 |
tom_rsa_make |
Making RSA key |
-901 |
335545247 |
tom_rsa_export |
Exporting @1 RSA key |
-901 |
335545248 |
tom_rsa_sign |
RSA-signing data |
-901 |
335545249 |
tom_rsa_verify |
Verifying RSA-signed data |
-901 |
335545250 |
tom_chacha_key |
Invalid key length @1, need 16 or 32 |
-901 |
335545251 |
bad_repl_handle |
invalid replicator handle |
-901 |
335545252 |
tra_snapshot_does_not_exist |
Transaction’s base snapshot number does not exist |
-901 |
335545253 |
eds_input_prm_not_used |
Input parameter '@1' is not used in SQL query text |
-901 |
335545255 |
invalid_time_zone_bind |
Invalid time zone bind mode @1 |
-901 |
335545256 |
invalid_decfloat_bind |
Invalid decfloat bind mode @1 |
-901 |
335545257 |
odd_hex_len |
Invalid hex text length @1, should be multiple of 2 |
-901 |
335545258 |
invalid_hex_digit |
Invalid hex digit @1 at position @2 |
-901 |
335545261 |
bind_convert |
Can not convert @1 to @2 |
-901 |
335545264 |
dyn_no_create_priv |
No permission for CREATE @1 operation |
-901 |
335545265 |
suspend_without_returns |
SUSPEND could not be used without RETURNS clause in PROCEDURE or EXECUTE BLOCK |
-901 |
335545274 |
tom_key_length |
Invalid key length @1, need >@2 |
-901 |
335545275 |
inf_invalid_args |
Invalid information arguments |
-901 |
335545276 |
sysf_invalid_null_empty |
Empty or NULL parameter @1 is not accepted |
-901 |
335545277 |
bad_loctab_num |
Undefined local table number @1 |
-901 |
335545278 |
quoted_str_bad |
Invalid text <@1> after quoted string |
-901 |
335545279 |
quoted_str_miss |
Missing terminating quote <@1> in the end of quoted string |
-901 |
335545285 |
ods_upgrade_err |
ODS upgrade failed while adding new system @1 |
-901 |
336068645 |
dyn_filter_not_found |
BLOB Filter @1 not found |
-901 |
336068649 |
dyn_func_not_found |
Function @1 not found |
-901 |
336068656 |
dyn_index_not_found |
Index not found |
-901 |
336068662 |
dyn_view_not_found |
View @1 not found |
-901 |
336068697 |
dyn_domain_not_found |
Domain not found |
-901 |
336068717 |
dyn_cant_modify_auto_trig |
Triggers created automatically cannot be modified |
-901 |
336068740 |
dyn_dup_table |
Table @1 already exists |
-901 |
336068748 |
dyn_proc_not_found |
Procedure @1 not found |
-901 |
336068752 |
dyn_exception_not_found |
Exception not found |
-901 |
336068754 |
dyn_proc_param_not_found |
Parameter @1 in procedure @2 not found |
-901 |
336068755 |
dyn_trig_not_found |
Trigger @1 not found |
-901 |
336068759 |
dyn_charset_not_found |
Character set @1 not found |
-901 |
336068760 |
dyn_collation_not_found |
Collation @1 not found |
-901 |
336068763 |
dyn_role_not_found |
Role @1 not found |
-901 |
336068767 |
dyn_name_longer |
Name longer than database column size |
-901 |
336068784 |
dyn_column_does_not_exist |
column @1 does not exist in table/view @2 |
-901 |
336068796 |
dyn_role_does_not_exist |
SQL role @1 does not exist |
-901 |
336068797 |
dyn_no_grant_admin_opt |
user @1 has no grant admin option on SQL role @2 |
-901 |
336068798 |
dyn_user_not_role_member |
user @1 is not a member of SQL role @2 |
-901 |
336068799 |
dyn_delete_role_failed |
@1 is not the owner of SQL role @2 |
-901 |
336068800 |
dyn_grant_role_to_user |
@1 is a SQL role and not a user |
-901 |
336068801 |
dyn_inv_sql_role_name |
user name @1 could not be used for SQL role |
-901 |
336068802 |
dyn_dup_sql_role |
SQL role @1 already exists |
-901 |
336068803 |
dyn_kywd_spec_for_role |
keyword @1 can not be used as a SQL role name |
-901 |
336068804 |
dyn_roles_not_supported |
SQL roles are not supported in on older versions of the database. A backup and restore of the database is required. |
-901 |
336068820 |
dyn_zero_len_id |
Zero length identifiers are not allowed |
-901 |
336068822 |
dyn_gen_not_found |
Sequence @1 not found |
-901 |
336068840 |
dyn_wrong_gtt_scope |
@1 cannot reference @2 |
-901 |
336068843 |
dyn_coll_used_table |
Collation @1 is used in table @2 (field name @3) and cannot be dropped |
-901 |
336068844 |
dyn_coll_used_domain |
Collation @1 is used in domain @2 and cannot be dropped |
-901 |
336068846 |
dyn_cannot_del_def_coll |
Cannot delete default collation of CHARACTER SET @1 |
-901 |
336068849 |
dyn_table_not_found |
Table @1 not found |
-901 |
336068851 |
dyn_coll_used_procedure |
Collation @1 is used in procedure @2 (parameter name @3) and cannot be dropped |
-901 |
336068856 |
dyn_ods_not_supp_feature |
Feature '@1' is not supported in ODS @2.@3 |
-901 |
336068858 |
dyn_no_empty_pw |
Password should not be empty string |
-901 |
336068859 |
dyn_dup_index |
Index @1 already exists |
-901 |
336068864 |
dyn_package_not_found |
Package @1 not found |
-901 |
336068865 |
dyn_schema_not_found |
Schema @1 not found |
-901 |
336068871 |
dyn_funcnotdef_package |
Function @1 has not been defined on the package body @2 |
-901 |
336068872 |
dyn_procnotdef_package |
Procedure @1 has not been defined on the package body @2 |
-901 |
336068873 |
dyn_funcsignat_package |
Function @1 has a signature mismatch on package body @2 |
-901 |
336068874 |
dyn_procsignat_package |
Procedure @1 has a signature mismatch on package body @2 |
-901 |
336068875 |
dyn_defvaldecl_package_proc |
Default values for parameters are not allowed in the definition of a previously declared packaged procedure @1.@2 |
-901 |
336068877 |
dyn_package_body_exists |
Package body @1 already exists |
-901 |
336068879 |
dyn_newfc_oldsyntax |
Cannot alter new style function @1 with ALTER EXTERNAL FUNCTION. Use ALTER FUNCTION instead. |
-901 |
336068886 |
dyn_func_param_not_found |
Parameter @1 in function @2 not found |
-901 |
336068887 |
dyn_routine_param_not_found |
Parameter @1 of routine @2 not found |
-901 |
336068888 |
dyn_routine_param_ambiguous |
Parameter @1 of routine @2 is ambiguous (found in both procedures and functions). Use a specifier keyword. |
-901 |
336068889 |
dyn_coll_used_function |
Collation @1 is used in function @2 (parameter name @3) and cannot be dropped |
-901 |
336068890 |
dyn_domain_used_function |
Domain @1 is used in function @2 (parameter name @3) and cannot be dropped |
-901 |
336068891 |
dyn_alter_user_no_clause |
ALTER USER requires at least one clause to be specified |
-901 |
336068894 |
dyn_duplicate_package_item |
Duplicate @1 @2 |
-901 |
336068895 |
dyn_cant_modify_sysobj |
System @1 @2 cannot be modified |
-901 |
336068896 |
dyn_cant_use_zero_increment |
INCREMENT BY 0 is an illegal option for sequence @1 |
-901 |
336068897 |
dyn_cant_use_in_foreignkey |
Can’t use @1 in FOREIGN KEY constraint |
-901 |
336068898 |
dyn_defvaldecl_package_func |
Default values for parameters are not allowed in the definition of a previously declared packaged function @1.@2 |
-901 |
336068900 |
dyn_cyclic_role |
role @1 can not be granted to role @2 |
-901 |
336068904 |
dyn_cant_use_zero_inc_ident |
INCREMENT BY 0 is an illegal option for identity column @1 of table @2 |
-901 |
336068907 |
dyn_no_ddl_grant_opt_priv |
no @1 privilege with grant option on DDL @2 |
-901 |
336068908 |
dyn_no_grant_opt_priv |
no @1 privilege with grant option on object @2 |
-901 |
336068909 |
dyn_func_not_exist |
Function @1 does not exist |
-901 |
336068910 |
dyn_proc_not_exist |
Procedure @1 does not exist |
-901 |
336068911 |
dyn_pack_not_exist |
Package @1 does not exist |
-901 |
336068912 |
dyn_trig_not_exist |
Trigger @1 does not exist |
-901 |
336068913 |
dyn_view_not_exist |
View @1 does not exist |
-901 |
336068914 |
dyn_rel_not_exist |
Table @1 does not exist |
-901 |
336068915 |
dyn_exc_not_exist |
Exception @1 does not exist |
-901 |
336068916 |
dyn_gen_not_exist |
Generator/Sequence @1 does not exist |
-901 |
336068917 |
dyn_fld_not_exist |
Field @1 of table @2 does not exist |
-901 |
336397211 |
dsql_too_many_values |
Too many values (more than @1) in member list to match against |
-901 |
336397236 |
dsql_unsupp_feature_dialect |
feature is not supported in dialect @1 |
-901 |
336397239 |
dsql_unsupported_in_auto_trans |
@1 is not supported inside IN AUTONOMOUS TRANSACTION block |
-901 |
336397258 |
dsql_alter_charset_failed |
ALTER CHARACTER SET @1 failed |
-901 |
336397259 |
dsql_comment_on_failed |
COMMENT ON @1 failed |
-901 |
336397260 |
dsql_create_func_failed |
CREATE FUNCTION @1 failed |
-901 |
336397261 |
dsql_alter_func_failed |
ALTER FUNCTION @1 failed |
-901 |
336397262 |
dsql_create_alter_func_failed |
CREATE OR ALTER FUNCTION @1 failed |
-901 |
336397263 |
dsql_drop_func_failed |
DROP FUNCTION @1 failed |
-901 |
336397264 |
dsql_recreate_func_failed |
RECREATE FUNCTION @1 failed |
-901 |
336397265 |
dsql_create_proc_failed |
CREATE PROCEDURE @1 failed |
-901 |
336397266 |
dsql_alter_proc_failed |
ALTER PROCEDURE @1 failed |
-901 |
336397267 |
dsql_create_alter_proc_failed |
CREATE OR ALTER PROCEDURE @1 failed |
-901 |
336397268 |
dsql_drop_proc_failed |
DROP PROCEDURE @1 failed |
-901 |
336397269 |
dsql_recreate_proc_failed |
RECREATE PROCEDURE @1 failed |
-901 |
336397270 |
dsql_create_trigger_failed |
CREATE TRIGGER @1 failed |
-901 |
336397271 |
dsql_alter_trigger_failed |
ALTER TRIGGER @1 failed |
-901 |
336397272 |
dsql_create_alter_trigger_failed |
CREATE OR ALTER TRIGGER @1 failed |
-901 |
336397273 |
dsql_drop_trigger_failed |
DROP TRIGGER @1 failed |
-901 |
336397274 |
dsql_recreate_trigger_failed |
RECREATE TRIGGER @1 failed |
-901 |
336397275 |
dsql_create_collation_failed |
CREATE COLLATION @1 failed |
-901 |
336397276 |
dsql_drop_collation_failed |
DROP COLLATION @1 failed |
-901 |
336397277 |
dsql_create_domain_failed |
CREATE DOMAIN @1 failed |
-901 |
336397278 |
dsql_alter_domain_failed |
ALTER DOMAIN @1 failed |
-901 |
336397279 |
dsql_drop_domain_failed |
DROP DOMAIN @1 failed |
-901 |
336397280 |
dsql_create_except_failed |
CREATE EXCEPTION @1 failed |
-901 |
336397281 |
dsql_alter_except_failed |
ALTER EXCEPTION @1 failed |
-901 |
336397282 |
dsql_create_alter_except_failed |
CREATE OR ALTER EXCEPTION @1 failed |
-901 |
336397283 |
dsql_recreate_except_failed |
RECREATE EXCEPTION @1 failed |
-901 |
336397284 |
dsql_drop_except_failed |
DROP EXCEPTION @1 failed |
-901 |
336397285 |
dsql_create_sequence_failed |
CREATE SEQUENCE @1 failed |
-901 |
336397286 |
dsql_create_table_failed |
CREATE TABLE @1 failed |
-901 |
336397287 |
dsql_alter_table_failed |
ALTER TABLE @1 failed |
-901 |
336397288 |
dsql_drop_table_failed |
DROP TABLE @1 failed |
-901 |
336397289 |
dsql_recreate_table_failed |
RECREATE TABLE @1 failed |
-901 |
336397290 |
dsql_create_pack_failed |
CREATE PACKAGE @1 failed |
-901 |
336397291 |
dsql_alter_pack_failed |
ALTER PACKAGE @1 failed |
-901 |
336397292 |
dsql_create_alter_pack_failed |
CREATE OR ALTER PACKAGE @1 failed |
-901 |
336397293 |
dsql_drop_pack_failed |
DROP PACKAGE @1 failed |
-901 |
336397294 |
dsql_recreate_pack_failed |
RECREATE PACKAGE @1 failed |
-901 |
336397295 |
dsql_create_pack_body_failed |
CREATE PACKAGE BODY @1 failed |
-901 |
336397296 |
dsql_drop_pack_body_failed |
DROP PACKAGE BODY @1 failed |
-901 |
336397297 |
dsql_recreate_pack_body_failed |
RECREATE PACKAGE BODY @1 failed |
-901 |
336397298 |
dsql_create_view_failed |
CREATE VIEW @1 failed |
-901 |
336397299 |
dsql_alter_view_failed |
ALTER VIEW @1 failed |
-901 |
336397300 |
dsql_create_alter_view_failed |
CREATE OR ALTER VIEW @1 failed |
-901 |
336397301 |
dsql_recreate_view_failed |
RECREATE VIEW @1 failed |
-901 |
336397302 |
dsql_drop_view_failed |
DROP VIEW @1 failed |
-901 |
336397303 |
dsql_drop_sequence_failed |
DROP SEQUENCE @1 failed |
-901 |
336397304 |
dsql_recreate_sequence_failed |
RECREATE SEQUENCE @1 failed |
-901 |
336397305 |
dsql_drop_index_failed |
DROP INDEX @1 failed |
-901 |
336397306 |
dsql_drop_filter_failed |
DROP FILTER @1 failed |
-901 |
336397307 |
dsql_drop_shadow_failed |
DROP SHADOW @1 failed |
-901 |
336397308 |
dsql_drop_role_failed |
DROP ROLE @1 failed |
-901 |
336397309 |
dsql_drop_user_failed |
DROP USER @1 failed |
-901 |
336397310 |
dsql_create_role_failed |
CREATE ROLE @1 failed |
-901 |
336397311 |
dsql_alter_role_failed |
ALTER ROLE @1 failed |
-901 |
336397312 |
dsql_alter_index_failed |
ALTER INDEX @1 failed |
-901 |
336397313 |
dsql_alter_database_failed |
ALTER DATABASE failed |
-901 |
336397314 |
dsql_create_shadow_failed |
CREATE SHADOW @1 failed |
-901 |
336397315 |
dsql_create_filter_failed |
DECLARE FILTER @1 failed |
-901 |
336397316 |
dsql_create_index_failed |
CREATE INDEX @1 failed |
-901 |
336397317 |
dsql_create_user_failed |
CREATE USER @1 failed |
-901 |
336397318 |
dsql_alter_user_failed |
ALTER USER @1 failed |
-901 |
336397319 |
dsql_grant_failed |
GRANT failed |
-901 |
336397320 |
dsql_revoke_failed |
REVOKE failed |
-901 |
336397322 |
dsql_mapping_failed |
@2 MAPPING @1 failed |
-901 |
336397323 |
dsql_alter_sequence_failed |
ALTER SEQUENCE @1 failed |
-901 |
336397324 |
dsql_create_generator_failed |
CREATE GENERATOR @1 failed |
-901 |
336397325 |
dsql_set_generator_failed |
SET GENERATOR @1 failed |
-901 |
336397330 |
dsql_max_exception_arguments |
Number of arguments (@1) exceeds the maximum (@2) number of EXCEPTION USING arguments |
-901 |
336397331 |
dsql_string_byte_length |
String literal with @1 bytes exceeds the maximum length of @2 bytes |
-901 |
336397332 |
dsql_string_char_length |
String literal with @1 characters exceeds the maximum length of @2 characters for the @3 character set |
-901 |
336397333 |
dsql_max_nesting |
Too many BEGIN…END nesting. Maximum level is @1 |
-901 |
336397334 |
dsql_recreate_user_failed |
RECREATE USER @1 failed |
-902 |
335544333 |
bug_check |
internal Firebird consistency check (@1) |
-902 |
335544335 |
db_corrupt |
database file appears corrupt (@1) |
-902 |
335544344 |
io_error |
I/O error during "@1" operation for file "@2" |
-902 |
335544346 |
metadata_corrupt |
corrupt system table |
-902 |
335544373 |
sys_request |
operating system directive @1 failed |
-902 |
335544384 |
badblk |
internal error |
-902 |
335544385 |
invpoolcl |
internal error |
-902 |
335544387 |
relbadblk |
internal error |
-902 |
335544388 |
blktoobig |
block size exceeds implementation restriction |
-902 |
335544394 |
badodsver |
incompatible version of on-disk structure |
-902 |
335544397 |
dirtypage |
internal error |
-902 |
335544398 |
waifortra |
internal error |
-902 |
335544399 |
doubleloc |
internal error |
-902 |
335544400 |
nodnotfnd |
internal error |
-902 |
335544401 |
dupnodfnd |
internal error |
-902 |
335544402 |
locnotmar |
internal error |
-902 |
335544404 |
corrupt |
database corrupted |
-902 |
335544405 |
badpage |
checksum error on database page @1 |
-902 |
335544406 |
badindex |
index is broken |
-902 |
335544409 |
trareqmis |
transaction - request mismatch (synchronization error) |
-902 |
335544410 |
badhndcnt |
bad handle count |
-902 |
335544411 |
wrotpbver |
wrong version of transaction parameter block |
-902 |
335544412 |
wroblrver |
unsupported BLR version (expected @1, encountered @2) |
-902 |
335544413 |
wrodpbver |
wrong version of database parameter block |
-902 |
335544415 |
badrelation |
database corrupted |
-902 |
335544416 |
nodetach |
internal error |
-902 |
335544417 |
notremote |
internal error |
-902 |
335544422 |
dbfile |
internal error |
-902 |
335544423 |
orphan |
internal error |
-902 |
335544432 |
lockmanerr |
lock manager error |
-902 |
335544436 |
sqlerr |
SQL error code = @1 |
-902 |
335544448 |
bad_sec_info |
|
-902 |
335544449 |
invalid_sec_info |
|
-902 |
335544470 |
buf_invalid |
cache buffer for page @1 invalid |
-902 |
335544471 |
indexnotdefined |
there is no index in table @1 with id @2 |
-902 |
335544472 |
login |
Your user name and password are not defined. Ask your database administrator to set up a Firebird login. |
-902 |
335544478 |
jrn_enable |
enable journal for database before starting online dump |
-902 |
335544479 |
old_failure |
online dump failure. Retry dump |
-902 |
335544480 |
old_in_progress |
an online dump is already in progress |
-902 |
335544481 |
old_no_space |
no more disk/tape space. Cannot continue online dump |
-902 |
335544482 |
no_wal_no_jrn |
journaling allowed only if database has Write-ahead Log |
-902 |
335544483 |
num_old_files |
maximum number of online dump files that can be specified is 16 |
-902 |
335544484 |
wal_file_open |
error in opening Write-ahead Log file during recovery |
-902 |
335544486 |
wal_failure |
Write-ahead log subsystem failure |
-902 |
335544505 |
no_archive |
must specify archive file when enabling long term journal for databases with round-robin log files |
-902 |
335544506 |
shutinprog |
database @1 shutdown in progress |
-902 |
335544520 |
jrn_present |
long-term journaling already enabled |
-902 |
335544528 |
shutdown |
database @1 shutdown |
-902 |
335544557 |
shutfail |
database shutdown unsuccessful |
-902 |
335544564 |
no_jrn |
long-term journaling not enabled |
-902 |
335544569 |
dsql_error |
Dynamic SQL Error |
-902 |
335544653 |
psw_attach |
cannot attach to password database |
-902 |
335544654 |
psw_start_trans |
cannot start transaction for password database |
-902 |
335544717 |
err_stack_limit |
stack size insufficent to execute current request |
-902 |
335544721 |
network_error |
Unable to complete network request to host "@1". |
-902 |
335544722 |
net_connect_err |
Failed to establish a connection. |
-902 |
335544723 |
net_connect_listen_err |
Error while listening for an incoming connection. |
-902 |
335544724 |
net_event_connect_err |
Failed to establish a secondary connection for event processing. |
-902 |
335544725 |
net_event_listen_err |
Error while listening for an incoming event connection request. |
-902 |
335544726 |
net_read_err |
Error reading data from the connection. |
-902 |
335544727 |
net_write_err |
Error writing data to the connection. |
-902 |
335544732 |
unsupported_network_drive |
Access to databases on file servers is not supported. |
-902 |
335544733 |
io_create_err |
Error while trying to create file |
-902 |
335544734 |
io_open_err |
Error while trying to open file |
-902 |
335544735 |
io_close_err |
Error while trying to close file |
-902 |
335544736 |
io_read_err |
Error while trying to read from file |
-902 |
335544737 |
io_write_err |
Error while trying to write to file |
-902 |
335544738 |
io_delete_err |
Error while trying to delete file |
-902 |
335544739 |
io_access_err |
Error while trying to access file |
-902 |
335544745 |
login_same_as_role_name |
Your login @1 is same as one of the SQL role name. Ask your database administrator to set up a valid Firebird login. |
-902 |
335544791 |
file_in_use |
The file @1 is currently in use by another process. Try again later. |
-902 |
335544795 |
unexp_spb_form |
unexpected item in service parameter block, expected @1 |
-902 |
335544809 |
extern_func_dir_error |
Function @1 is in @2, which is not in a permitted directory for external functions. |
-902 |
335544819 |
io_32bit_exceeded_err |
File exceeded maximum size of 2GB. Add another database file or use a 64 bit I/O version of Firebird. |
-902 |
335544831 |
conf_access_denied |
Use of @1 at location @2 is not allowed by server configuration |
-902 |
335544834 |
cursor_not_open |
Cursor is not open |
-902 |
335544841 |
cursor_already_open |
Cursor is already open |
-902 |
335544856 |
att_shutdown |
connection shutdown |
-902 |
335544882 |
long_login |
Login name too long (@1 characters, maximum allowed @2) |
-902 |
335544936 |
psw_db_error |
Security database error |
-902 |
335544970 |
missing_required_spb |
Missing required item @1 in service parameter block |
-902 |
335544971 |
net_server_shutdown |
@1 server is shutdown |
-902 |
335544974 |
no_threads |
Could not start first worker thread - shutdown server |
-902 |
335544975 |
net_event_connect_timeout |
Timeout occurred while waiting for a secondary connection for event processing |
-902 |
335544984 |
instance_conflict |
Shared memory area is probably already created by another engine instance in another Windows session |
-902 |
335544987 |
no_trusted_spb |
Use of TRUSTED switches in spb_command_line is prohibited |
-902 |
335545029 |
missing_data_structures |
Install incomplete. To complete security database initialization please CREATE USER. For details read doc/README.security_database.txt. |
-902 |
335545030 |
protect_sys_tab |
@1 operation is not allowed for system table @2 |
-902 |
335545032 |
wroblrver2 |
unsupported BLR version (expected between @1 and @2, encountered @3) |
-902 |
335545043 |
decrypt_error |
Missing crypt plugin, but page appears encrypted |
-902 |
335545044 |
no_providers |
No providers loaded |
-902 |
335545053 |
miss_config |
Missing configuration file: @1 |
-902 |
335545054 |
conf_line |
@1: illegal line <@2> |
-902 |
335545055 |
conf_include |
Invalid include operator in @1 for <@2> |
-902 |
335545056 |
include_depth |
Include depth too big |
-902 |
335545057 |
include_miss |
File to include not found |
-902 |
335545060 |
sec_context |
Missing security context for @1 |
-902 |
335545061 |
multi_segment |
Missing segment @1 in multisegment connect block parameter |
-902 |
335545062 |
login_changed |
Different logins in connect and attach packets - client library error |
-902 |
335545063 |
auth_handshake_limit |
Exceeded exchange limit during authentication handshake |
-902 |
335545064 |
wirecrypt_incompatible |
Incompatible wire encryption levels requested on client and server |
-902 |
335545065 |
miss_wirecrypt |
Client attempted to attach unencrypted but wire encryption is required |
-902 |
335545066 |
wirecrypt_key |
Client attempted to start wire encryption using unknown key @1 |
-902 |
335545067 |
wirecrypt_plugin |
Client attempted to start wire encryption using unsupported plugin @1 |
-902 |
335545068 |
secdb_name |
Error getting security database name from configuration file |
-902 |
335545069 |
auth_data |
Client authentication plugin is missing required data from server |
-902 |
335545070 |
auth_datalength |
Client authentication plugin expected @2 bytes of @3 from server, got @1 |
-902 |
335545106 |
login_error |
Error occurred during login, please check server firebird.log for details |
-902 |
335545107 |
already_opened |
Database already opened with engine instance, incompatible with current |
-902 |
335545108 |
bad_crypt_key |
Invalid crypt key @1 |
-902 |
335545112 |
miss_prvlg |
System privilege @1 is missing |
-902 |
335545113 |
crypt_checksum |
Invalid or missing checksum of encrypted database |
-902 |
335545114 |
not_dba |
You must have SYSDBA rights at this server |
-902 |
335545126 |
sql_too_long |
SQL statement is too long. Maximum size is @1 bytes. |
-902 |
335545130 |
att_shut_killed |
Killed by database administrator. |
-902 |
335545131 |
att_shut_idle |
Idle timeout expired. |
-902 |
335545132 |
att_shut_db_down |
Database is shutdown. |
-902 |
335545133 |
att_shut_engine |
Engine is shutdown. |
-902 |
335545134 |
overriding_without_identity |
OVERRIDING clause can be used only when an identity column is present in the INSERT’s field list for table/view @1 |
-902 |
335545135 |
overriding_system_invalid |
OVERRIDING SYSTEM VALUE can be used only for identity column defined as 'GENERATED ALWAYS' in INSERT for table/view @1 |
-902 |
335545136 |
overriding_user_invalid |
OVERRIDING USER VALUE can be used only for identity column defined as 'GENERATED BY DEFAULT' in INSERT for table/view @1 |
-902 |
335545137 |
overriding_system_missing |
OVERRIDING SYSTEM VALUE should be used to override the value of an identity column defined as 'GENERATED ALWAYS' in table/view @1 |
-902 |
335545166 |
icu_entrypoint |
Missing entrypoint @1 in ICU library |
-902 |
335545167 |
icu_library |
Could not find acceptable ICU library |
-902 |
335545170 |
iconv_open |
Error opening international conversion descriptor from @1 to @2 |
-902 |
335545173 |
deflate_init |
Compression stream init error @1 |
-902 |
335545174 |
inflate_init |
Decompression stream init error @1 |
-902 |
335545180 |
non_plugin_protocol |
Plugin not supported by network protocol |
-902 |
335545200 |
map_event |
Error using events in mapping shared memory: @1 |
-902 |
335545201 |
map_overflow |
Global mapping memory overflow |
-902 |
335545204 |
db_crypt_key |
Missing database encryption key for your attachment |
-902 |
335545259 |
bind_err |
Error processing isc_dpb_set_bind clumplet "@1" |
-902 |
335545260 |
bind_statement |
The following statement failed: @1 |
-902 |
335545270 |
wrong_page |
RDB$PAGES written by non-system transaction, DB appears to be damaged |
-902 |
335545271 |
repl_error |
Replication error |
-902 |
335545272 |
ses_reset_failed |
Reset of user session failed. Connection is shut down. |
-902 |
335545273 |
block_size |
File size is less than expected |
-902 |
335545280 |
wrong_shmem_ver |
@1: inconsistent shared memory type/version; found @2, expected @3 |
-902 |
335545281 |
wrong_shmem_bitness |
@1-bit engine can’t open database already opened by @2-bit engine |
-902 |
335545287 |
idx_expr_not_found |
Definition of index expression is not found for index @1 |
-902 |
335545288 |
idx_cond_not_found |
Definition of index condition is not found for index @1 |
-904 |
335544324 |
bad_db_handle |
invalid database handle (no active connection) |
-904 |
335544375 |
unavailable |
unavailable database |
-904 |
335544381 |
imp_exc |
Implementation limit exceeded |
-904 |
335544386 |
nopoolids |
too many requests |
-904 |
335544389 |
bufexh |
buffer exhausted |
-904 |
335544391 |
bufinuse |
buffer in use |
-904 |
335544393 |
reqinuse |
request in use |
-904 |
335544424 |
no_lock_mgr |
no lock manager available |
-904 |
335544430 |
virmemexh |
unable to allocate memory from operating system |
-904 |
335544451 |
update_conflict |
update conflicts with concurrent update |
-904 |
335544453 |
obj_in_use |
object @1 is in use |
-904 |
335544455 |
shadow_accessed |
cannot attach active shadow file |
-904 |
335544460 |
shadow_missing |
a file in manual shadow @1 is unavailable |
-904 |
335544661 |
index_root_page_full |
cannot add index, index root page is full. |
-904 |
335544676 |
sort_mem_err |
sort error: not enough memory |
-904 |
335544683 |
req_depth_exceeded |
request depth exceeded. (Recursive definition?) |
-904 |
335544758 |
sort_rec_size_err |
sort record size of @1 bytes is too big |
-904 |
335544761 |
too_many_handles |
too many open handles to database |
-904 |
335544762 |
optimizer_blk_exc |
size of optimizer block exceeded |
-904 |
335544792 |
service_att_err |
Cannot attach to services manager |
-904 |
335544799 |
svc_name_missing |
The service name was not specified. |
-904 |
335544813 |
optimizer_between_err |
Unsupported field type specified in BETWEEN predicate. |
-904 |
335544827 |
exec_sql_invalid_arg |
Invalid argument in EXECUTE STATEMENT - cannot convert to string |
-904 |
335544828 |
exec_sql_invalid_req |
Wrong request type in EXECUTE STATEMENT '@1' |
-904 |
335544829 |
exec_sql_invalid_var |
Variable type (position @1) in EXECUTE STATEMENT '@2' INTO does not match returned column type |
-904 |
335544830 |
exec_sql_max_call_exceeded |
Too many recursion levels of EXECUTE STATEMENT |
-904 |
335544832 |
wrong_backup_state |
Cannot change difference file name while database is in backup mode |
-904 |
335544833 |
wal_backup_err |
Physical backup is not allowed while Write-Ahead Log is in use |
-904 |
335544852 |
partner_idx_incompat_type |
partner index segment no @1 has incompatible data type |
-904 |
335544857 |
blobtoobig |
Maximum BLOB size exceeded |
-904 |
335544862 |
record_lock_not_supp |
Stream does not support record locking |
-904 |
335544863 |
partner_idx_not_found |
Cannot create foreign key constraint @1. Partner index does not exist or is inactive. |
-904 |
335544864 |
tra_num_exc |
Transactions count exceeded. Perform backup and restore to make database operable again |
-904 |
335544865 |
field_disappeared |
Column has been unexpectedly deleted |
-904 |
335544878 |
concurrent_transaction |
concurrent transaction number is @1 |
-904 |
335544935 |
circular_computed |
Cannot have circular dependencies with computed fields |
-904 |
335544992 |
lock_dir_access |
Can not access lock files directory @1 |
-904 |
335545020 |
request_outdated |
Request can’t access new records in relation @1 and should be recompiled |
-904 |
335545096 |
read_conflict |
read conflicts with concurrent update |
-904 |
335545110 |
max_idx_depth |
Maximum index depth (@1 levels) is reached |
-906 |
335544452 |
unlicensed |
product @1 is not licensed |
-906 |
335544744 |
max_att_exceeded |
Maximum user count exceeded. Contact your database administrator. |
-909 |
335544667 |
drdb_completed_with_errs |
drop database completed with errors |
-911 |
335544459 |
rec_in_limbo |
record from transaction @1 is stuck in limbo |
-913 |
335544336 |
deadlock |
deadlock |
-922 |
335544323 |
bad_db_format |
file @1 is not a valid database |
-923 |
335544421 |
connect_reject |
connection rejected by remote interface |
-923 |
335544461 |
cant_validate |
secondary server attachments cannot validate databases |
-923 |
335544462 |
cant_start_journal |
secondary server attachments cannot start journaling |
-923 |
335544464 |
cant_start_logging |
secondary server attachments cannot start logging |
-924 |
335544325 |
bad_dpb_content |
bad parameters on attach or create database |
-924 |
335544433 |
journerr |
communication error with journal "@1" |
-924 |
335544441 |
bad_detach |
database detach completed with errors |
-924 |
335544648 |
conn_lost |
Connection lost to pipe server |
-924 |
335544972 |
bad_conn_str |
Invalid connection string |
-924 |
335545085 |
baddpb_damaged_mode |
Incompatible mode of attachment to damaged database |
-924 |
335545086 |
baddpb_buffers_range |
Attempt to set in database number of buffers which is out of acceptable range [@1:@2] |
-924 |
335545087 |
baddpb_temp_buffers |
Attempt to temporarily set number of buffers less than @1 |
-924 |
335545286 |
bad_par_workers |
Wrong parallel workers value @1, valid range are from 1 to @2 |
-926 |
335544447 |
no_rollback |
no rollback performed |
-999 |
335544689 |
ib_error |
Firebird error |
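The symbolic names in this listing are not only diagnostic: they can be referenced in PSQL WHEN GDSCODE handlers to trap a specific engine error. A minimal sketch, in which the procedure, table, and column names are illustrative only:

```sql
-- Sketch: trap the update_conflict error (symbol from the listing above).
-- TRY_MARK_PROCESSED, TURNOVER, ENTRY_ID and PROCESSED are hypothetical names.
CREATE OR ALTER PROCEDURE TRY_MARK_PROCESSED (ID INTEGER)
AS
BEGIN
  UPDATE TURNOVER SET PROCESSED = TRUE WHERE ENTRY_ID = :ID;

  WHEN GDSCODE update_conflict DO
    -- another transaction changed the row; leave it for the next run
    EXIT;
END
```

Handlers are declared at the end of a BEGIN…END block and fire for any statement within it.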
Appendix C: Reserved Words and Keywords
Reserved words are part of the Firebird SQL language. They cannot be used as identifiers (e.g. as table or procedure names), except when enclosed in double quotes in Dialect 3. However, you should avoid this unless you have a compelling reason.
Non-reserved keywords are also part of the language. They have a special meaning when used in the proper context, but they are not reserved for Firebird’s own and exclusive use. You can use them as identifiers without double-quoting.
Since Firebird 5.0, the reserved words and keywords can be queried from the virtual table RDB$KEYWORDS.
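For example, the reserved words alone can be listed with a query like the following (the column names RDB$KEYWORD_NAME and RDB$KEYWORD_RESERVED are assumed here from the RDB$KEYWORDS table definition):

```sql
-- List only the reserved words; drop the WHERE clause for all keywords.
SELECT RDB$KEYWORD_NAME
FROM RDB$KEYWORDS
WHERE RDB$KEYWORD_RESERVED
ORDER BY RDB$KEYWORD_NAME;
```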
Reserved words
Full list of reserved words in Firebird 5.0:
Keywords
The following terms have a special meaning in Firebird 5.0 SQL. The list includes all keywords, both reserved and non-reserved.
Appendix D: System Tables
When you create a database, the Firebird engine creates a number of system tables. Metadata — the descriptions and attributes of all database objects — are stored in these system tables.
System table identifiers all begin with the prefix RDB$.
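The system tables themselves can be enumerated from RDB$RELATIONS, since engine-created objects are marked with RDB$SYSTEM_FLAG = 1 (a sketch; the column names are the standard ones described later in this appendix):

```sql
-- List all system tables and views in the current database.
SELECT RDB$RELATION_NAME
FROM RDB$RELATIONS
WHERE RDB$SYSTEM_FLAG = 1
ORDER BY RDB$RELATION_NAME;
```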
RDB$AUTH_MAPPING
-
Stores authentication and other security mappings
RDB$BACKUP_HISTORY
-
History of backups performed using nBackup
RDB$CHARACTER_SETS
-
Names and describes the character sets available in the database
RDB$CHECK_CONSTRAINTS
-
Cross-references between the names of constraints (NOT NULL constraints, CHECK constraints, and ON UPDATE and ON DELETE clauses in foreign key constraints) and their associated system-generated triggers
RDB$COLLATIONS
-
Collations for all character sets
RDB$CONFIG
-
Virtual table with configuration settings applied for the current database
RDB$DATABASE
-
Basic information about the database
RDB$DB_CREATORS
-
A list of users granted the CREATE DATABASE privilege when using the specified database as a security database
RDB$DEPENDENCIES
-
Information about dependencies between database objects
RDB$EXCEPTIONS
-
Custom database exceptions
RDB$FIELDS
-
Column and domain definitions, both system and custom
RDB$FIELD_DIMENSIONS
-
Dimensions of array columns
RDB$FILES
-
Information about secondary files and shadow files
RDB$FILTERS
-
Information about BLOB filters
RDB$FORMATS
-
Information about changes in the formats of tables
RDB$FUNCTIONS
-
Information about external functions
RDB$FUNCTION_ARGUMENTS
-
Attributes of the parameters of external functions
RDB$GENERATORS
-
Information about generators (sequences)
RDB$INDEX_SEGMENTS
-
Segments and index positions
RDB$INDICES
-
Definitions of all indexes in the database (system- or user-defined)
RDB$LOG_FILES
-
Not used in the current version
RDB$PACKAGES
-
Stores the definition (header and body) of SQL packages
RDB$PAGES
-
Information about database pages
RDB$PROCEDURES
-
Definitions of stored procedures
RDB$PROCEDURE_PARAMETERS
-
Parameters of stored procedures
RDB$REF_CONSTRAINTS
-
Definitions of referential constraints (foreign keys)
RDB$RELATIONS
-
Headers of tables and views
RDB$RELATION_CONSTRAINTS
-
Definitions of all table-level constraints
RDB$RELATION_FIELDS
-
Top-level definitions of table columns
RDB$ROLES
-
Role definitions
RDB$SECURITY_CLASSES
-
Access control lists
RDB$TIME_ZONES
-
Time zones
RDB$TRANSACTIONS
-
State of multi-database transactions
RDB$TRIGGERS
-
Trigger definitions
RDB$TRIGGER_MESSAGES
-
Trigger messages
RDB$TYPES
-
Definitions of enumerated data types
RDB$USER_PRIVILEGES
-
SQL privileges granted to system users
RDB$VIEW_RELATIONS
-
Tables that are referred to in view definitions: one record for each table in a view
RDB$AUTH_MAPPING
RDB$AUTH_MAPPING
stores authentication and other security mappings.
Column Name | Data Type | Description |
---|---|---|
|
|
Name of the mapping |
|
|
Using definition:
|
|
|
Mapping applies for authentication information from this specific plugin |
|
|
Mapping applies for authentication information from this specific database |
|
|
The type of authentication object (defined by plugin) to map from, or |
|
|
The name of the authentication object to map from |
|
|
The type to map to
|
|
|
The name to map to |
|
|
Flag:
|
|
|
Optional description of the mapping (comment) |
RDB$BACKUP_HISTORY
RDB$BACKUP_HISTORY
stores the history of backups performed using the nBackup utility.
Column Name | Data Type | Description |
---|---|---|
|
|
The identifier assigned by the engine |
|
|
Backup date and time |
|
|
Backup level |
|
|
Unique identifier |
|
|
System (scan) number |
|
|
Full path and file name of backup file |
RDB$CHARACTER_SETS
RDB$CHARACTER_SETS
names and describes the character sets available in the database.
Column Name | Data Type | Description |
---|---|---|
|
|
Character set name |
|
|
Not used |
|
|
The number of characters in the set. Not used for existing character sets |
|
|
The name of the default collation for the character set |
|
|
Unique character set identifier |
|
|
System flag: value is 1 if the character set is defined in the system when the database is created; value is 0 for a user-defined character set |
|
|
Could store text description of the character set |
|
|
For a user-defined character set that is accessed via an external function, the name of the external function |
|
|
The maximum number of bytes representing one character |
|
|
May reference a security class defined in the table |
|
|
The username of the user who created the character set originally |
RDB$CHECK_CONSTRAINTS
RDB$CHECK_CONSTRAINTS
provides the cross-references between the names of system-generated triggers for constraints and the names of the associated constraints (NOT NULL
constraints, CHECK
constraints and the ON UPDATE
and ON DELETE
clauses in foreign key constraints).
Column Name | Data Type | Description |
---|---|---|
|
|
Constraint name, defined by the user or automatically generated by the system |
|
|
For |
RDB$COLLATIONS
RDB$COLLATIONS
stores collations for all character sets.
Column Name | Data Type | Description |
---|---|---|
|
|
Collation name |
|
|
Collation identifier. Together with the character set identifier, it is a unique collation identifier |
|
|
Character set identifier. Together with the collation identifier, it is a unique identifier |
|
|
Collation attributes. It is a bit mask where the first bit shows whether trailing spaces should be taken into account in collations (0 - NO PAD; 1 - PAD SPACE); the second bit shows whether the collation is case-sensitive (0 - CASE SENSITIVE, 1 - CASE INSENSITIVE); the third bit shows whether the collation is accent-sensitive (0 - ACCENT SENSITIVE, 1 - ACCENT INSENSITIVE). Thus, a value of 5 means that the collation does not take trailing spaces into account and is accent-insensitive |
|
|
Flag: the value of 0 means it is user-defined; the value of 1 means it is system-defined |
|
|
Could store text description of the collation |
|
|
Not currently used |
|
|
The name of the base collation for this collation |
|
|
Describes specific attributes |
|
|
May reference a security class defined in the table |
|
|
The username of the user who created the collation originally |
RDB$CONFIG
RDB$CONFIG
is a virtual table showing the configuration settings of the current database for the current connection.
Table RDB$CONFIG is populated from in-memory structures upon request, and the instance is preserved for the lifetime of the SQL query.
For security reasons, access to this table is allowed for administrators only.
Non-privileged users see no rows in this table (and no error is raised).
Column Name | Data Type | Description |
---|---|---|
|
|
Unique row identifier, no special meaning |
|
|
Setting name (e.g. |
|
|
Actual value of setting |
|
|
Default value of setting (defined in the Firebird code) |
|
|
TRUE if value is explicitly configured, FALSE when default |
|
|
Name of the configuration file (relative to the Firebird root directory) where this setting was taken from, or special value |
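As an example of using this table, the settings that were explicitly configured (rather than left at their defaults) can be retrieved with a query like the following. The column names RDB$CONFIG_NAME, RDB$CONFIG_VALUE, RDB$CONFIG_SOURCE, and RDB$CONFIG_IS_SET are assumed from the table definition:

```sql
-- Show only explicitly configured settings and where they were set.
-- Administrators only; non-privileged users simply see no rows.
SELECT RDB$CONFIG_NAME, RDB$CONFIG_VALUE, RDB$CONFIG_SOURCE
FROM RDB$CONFIG
WHERE RDB$CONFIG_IS_SET;
```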
RDB$DATABASE
RDB$DATABASE
stores basic information about the database.
It contains only one record.
Column Name | Data Type | Description |
---|---|---|
|
|
Database comment text |
|
|
A number that steps up by one each time a table or view is added to the database |
|
|
The security class defined in |
|
|
The name of the default character set for the database set in the |
|
|
Number of seconds "delay" (established with the |
|
|
The default
|
RDB$DB_CREATORS
RDB$DB_CREATORS
contains a list of users granted the CREATE DATABASE
privilege when using the specified database as a security database.
Column Name | Data Type | Description |
---|---|---|
|
|
User or role name |
|
|
Type of user
|
RDB$DEPENDENCIES
RDB$DEPENDENCIES
stores the dependencies between database objects.
Column Name | Data Type | Description |
---|---|---|
|
|
The name of the view, procedure, trigger, |
|
|
The name of the object that the defined object — the table, view, procedure, trigger, |
|
|
The column name in the depended-on object that is referred to by the dependent view, procedure, trigger, |
|
|
Identifies the type of the dependent object:
|
|
|
Identifies the type of the object depended on:
|
|
|
The package of a procedure or function for which this describes the dependency. |
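A typical use of this table is an impact check before altering or dropping an object. A sketch, assuming the standard column names RDB$DEPENDENT_NAME, RDB$DEPENDENT_TYPE, RDB$FIELD_NAME, and RDB$DEPENDED_ON_NAME ('EMPLOYEE' is an illustrative table name):

```sql
-- List objects (views, procedures, triggers, ...) that depend on a table,
-- and which of its columns they reference.
SELECT RDB$DEPENDENT_NAME, RDB$DEPENDENT_TYPE, RDB$FIELD_NAME
FROM RDB$DEPENDENCIES
WHERE RDB$DEPENDED_ON_NAME = 'EMPLOYEE'
ORDER BY RDB$DEPENDENT_NAME;
```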
RDB$EXCEPTIONS
RDB$EXCEPTIONS
stores custom database exceptions.
Column Name | Data Type | Description |
---|---|---|
|
|
Custom exception name |
|
|
The unique number of the exception assigned by the system |
|
|
Exception message text |
|
|
Could store text description of the exception |
|
|
Flag:
|
|
|
May reference a security class defined in the table |
|
|
The username of the user who created the exception originally |
RDB$FIELDS
RDB$FIELDS stores definitions of columns and domains, both system and custom. This is where the detailed data attributes are stored for all columns.
The column |
Column Name | Data Type | Description |
---|---|---|
|
|
The unique name of the domain created by the user or of the domain automatically built for the table column by the system. System-created domain names start with the “RDB$” prefix |
|
|
Not used |
|
|
The binary language representation (BLR) of the expression of the |
|
|
The original source text in the SQL language specifying the check of the |
|
|
The binary language representation (BLR) of the expression of a |
|
|
The original source text of the expression that defines a |
|
|
The binary language representation (BLR) of the default value for the field or domain |
|
|
The default value in the source code, as an SQL constant or expression |
|
|
Column size in bytes.
|
|
|
The negative number that specifies the scale for |
|
|
Data type code for the column:
Codes for |
|
|
Specifies the subtype for the BLOB data type:
Specifies for the CHAR data type:
Specifies the particular data type for the integer data types (
|
|
|
Not used |
|
|
Not used |
|
|
Any domain (table column) comment text |
|
|
Flag: the value of 1 means the domain is automatically created by the system, the value of 0 means that the domain is defined by the user |
|
|
Not used |
|
|
Specifies the length of the |
|
|
Not used |
|
|
The length of the column in bytes if it belongs to an external table.
Always |
|
|
The scale factor of an integer-type field in an external table; represents the power of 10 by which the integer is multiplied |
|
|
The data type of the field as it is represented in an external table:
|
|
|
Defines the number of dimensions in an array if the column is defined as an array.
Always |
|
|
Specifies whether the column can take an empty value (the field will contain |
|
|
The length of |
|
|
The identifier of the collation for a character column or domain. If it is not defined, the value of the field will be 0 |
|
|
The identifier of the character set for a character column, |
|
|
Specifies the total number of digits for the fixed-point numeric data type ( |
|
|
May reference a security class defined in the table |
|
|
The username of the user who created the domain originally |
RDB$FIELD_DIMENSIONS
RDB$FIELD_DIMENSIONS
stores the dimensions of array columns.
Column Name | Data Type | Description |
---|---|---|
|
|
The name of the array column.
It must be present in the |
|
|
Identifies one dimension in the array column. The numbering of dimensions starts with 0 |
|
|
The lower bound of this dimension |
|
|
The upper bound of this dimension |
RDB$FILES
RDB$FILES
stores information about secondary files and shadow files.
Column Name | Data Type | Description |
---|---|---|
|
|
The full path to the file and the name of either
|
|
|
The sequential number of the secondary file in a sequence or of the shadow file in a shadow file set |
|
|
The initial page number in the secondary file or shadow file |
|
|
File length in database pages |
|
|
For internal use |
|
|
Shadow set number.
If the row describes a database secondary file, the field will be |
RDB$FILTERS
RDB$FILTERS
stores information about BLOB
filters.
Column Name | Data Type | Description |
---|---|---|
|
|
The unique identifier of the |
|
|
Documentation about the |
|
|
The name of the dynamic library or shared object where the code of the |
|
|
The exported name of the |
|
|
The |
|
|
The |
|
|
Flag indicating whether the filter is user-defined or internally defined:
|
|
|
May reference a security class defined in the table |
|
|
The username of the user who created the filter originally |
RDB$FORMATS
RDB$FORMATS
stores information about changes in tables.
Each time any metadata change to a table is committed, it gets a new format number.
When the format number of any table reaches 255, or any view 32,000, the entire database becomes inoperable.
To return to normal, the database must be backed up with the gbak utility and restored from that backup copy.
Column Name | Data Type | Description |
---|---|---|
|
|
Table or view identifier |
|
|
Table format identifier — maximum 255 for tables, 32,000 for views. The critical time comes when this number approaches 255 for any table or 32,000 for any view |
|
|
Stores column names and data attributes as |
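As a sketch of how this table can be monitored (assuming the standard column names `RDB$RELATION_ID` and `RDB$FORMAT`, and joining to `RDB$RELATIONS` for readable names), the highest format number recorded per table can be listed to see how close each one is to the 255-format limit:

```sql
-- Find the highest format number recorded for each relation.
-- Tables approaching 255 (or views approaching 32,000) need a
-- backup/restore cycle with gbak to reset their format counters.
SELECT R.RDB$RELATION_NAME,
       MAX(F.RDB$FORMAT) AS LAST_FORMAT
FROM RDB$FORMATS F
JOIN RDB$RELATIONS R
  ON R.RDB$RELATION_ID = F.RDB$RELATION_ID
GROUP BY R.RDB$RELATION_NAME
ORDER BY 2 DESC;
```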
RDB$FUNCTIONS
RDB$FUNCTIONS
stores the information needed by the engine about stored functions and external functions (user-defined functions, UDFs).
Column Name | Data Type | Description |
---|---|---|
|
|
The unique (declared) name of the external function |
|
|
Not currently used |
|
|
Not currently used |
|
|
Any text with comments related to the external function |
|
|
The name of the dynamic library or shared object where the code of the external function is located |
|
|
The exported name of the external function in the function library.
Note, this is often not the same as |
|
|
The position number of the returned argument in the list of parameters corresponding to input arguments |
|
|
Flag indicating whether the function is user-defined or internally defined:
|
|
|
Engine for external functions.
|
|
|
Package that contains this function (or |
|
|
|
|
|
The PSQL sourcecode of the function |
|
|
Unique identifier of the function |
|
|
The binary language representation (BLR) of the function code (PSQL function only) |
|
|
Indicates whether the source PSQL of the function remains valid after the latest |
|
|
Contains debugging information about variables used in the function (PSQL function only) |
|
|
May reference a security class defined in the table |
|
|
The username of the user who created the function originally |
|
|
The legacy style attribute of the function.
|
|
|
Deterministic flag.
|
|
|
The
|
RDB$FUNCTION_ARGUMENTS
RDB$FUNCTION_ARGUMENTS
stores the parameters of functions and their attributes.
Column Name | Data Type | Description |
---|---|---|
|
|
The unique name (declared identifier) of the function |
|
|
The position of the argument in the list of arguments |
|
|
Flag: how this argument is passed:
Only for legacy external functions. |
|
|
Data type code defined for the column:
Only for legacy external functions. |
|
|
The scale of an integer or a fixed-point argument. It is an exponent of 10. Only for legacy external functions. |
|
|
Argument length in bytes:
Only for legacy external functions. |
|
|
Stores the Only for legacy external functions. |
|
|
The identifier of the character set for a character argument. Only for legacy external functions. |
|
|
The number of digits of precision available for the data type of the argument. Only for legacy external functions. |
|
|
The length of a Only for legacy external functions. |
|
|
Package name of the function (or |
|
|
Parameter name |
|
|
The name of the user-created domain, when a domain is referenced instead of a data type. If the name starts with the prefix “RDB$”, it is the name of the domain automatically generated by the system for the parameter. |
|
|
The binary language representation (BLR) of the default value of the parameter |
|
|
The default value for the parameter, in PSQL code |
|
|
The identifier of the collation used for a character parameter |
|
|
The flag indicating whether |
|
|
Parameter passing mechanism for non-legacy functions:
|
|
|
The name of the column the parameter references, if it was declared using |
|
|
The name of the table the parameter references, if it was declared using |
|
|
Flag:
|
|
|
Optional description of the function argument (comment) |
RDB$GENERATORS
RDB$GENERATORS
stores the metadata of sequences (generators).
Column Name | Data Type | Description |
---|---|---|
|
|
The unique name of the sequence |
|
|
The unique identifier assigned to the sequence by the system |
|
|
Flag:
|
|
|
Optional description of the sequence (comment) |
|
|
May reference a security class defined in the table |
|
|
The username of the user who created the sequence originally |
|
|
Stores the start value ( |
|
|
Stores the increment ( |
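As an illustrative sketch (assuming the standard column names `RDB$GENERATOR_NAME`, `RDB$INITIAL_VALUE`, `RDB$GENERATOR_INCREMENT` and `RDB$SYSTEM_FLAG` for this table), the user-defined sequences and their start and increment values can be listed like this:

```sql
-- List user-defined sequences with their start value and increment.
-- RDB$SYSTEM_FLAG = 0 filters out system-created sequences.
SELECT RDB$GENERATOR_NAME,
       RDB$INITIAL_VALUE,
       RDB$GENERATOR_INCREMENT
FROM RDB$GENERATORS
WHERE RDB$SYSTEM_FLAG = 0
ORDER BY RDB$GENERATOR_NAME;
```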
RDB$INDEX_SEGMENTS
RDB$INDEX_SEGMENTS
stores the segments (table columns) of indexes and their positions in the key.
A separate row is stored for each column in an index.
Column Name | Data Type | Description |
---|---|---|
|
|
The name of the index this segment is related to.
The master record is |
|
|
The name of a column belonging to the index, corresponding to an identifier for the table and that column in |
|
|
The column position in the index. Positions are numbered left-to-right, starting at zero |
|
|
The last known (calculated) selectivity of this column in the index. The higher the number, the lower the selectivity. |
RDB$INDICES
RDB$INDICES
stores definitions of both system- and user-defined indexes.
The attributes of each column belonging to an index are stored in one row of the table RDB$INDEX_SEGMENTS
.
Column Name | Data Type | Description |
---|---|---|
|
|
The unique name of the index specified by the user or automatically generated by the system |
|
|
The name of the table to which the index belongs.
It corresponds to an identifier in |
|
|
The internal (system) identifier of the index |
|
|
Specifies whether the index is unique:
|
|
|
Could store comments concerning the index |
|
|
The number of segments (columns) in the index |
|
|
Indicates whether the index is currently active:
|
|
|
Distinguishes between an ascending ( |
|
|
The name of the primary or unique key index referenced by the foreign key backed by this index;
|
|
|
Indicates whether the index is system-defined or user-defined:
|
|
|
The binary language representation (BLR) of the expression of an expression index, used for calculating the values for the index at runtime. |
|
|
The source code of the expression of an expression index |
|
|
Stores the last known selectivity of the entire index, calculated by execution of a |
|
|
The binary language representation (BLR) of the |
|
|
The source code of the |
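The relationship between the two tables can be sketched with a join on the index name (assuming the standard column names `RDB$INDEX_NAME`, `RDB$RELATION_NAME`, `RDB$FIELD_NAME` and `RDB$FIELD_POSITION`), reconstructing the column list of each user-defined index:

```sql
-- Reconstruct the column list of each user-defined index,
-- joining RDB$INDICES to RDB$INDEX_SEGMENTS on the index name.
SELECT I.RDB$RELATION_NAME,
       I.RDB$INDEX_NAME,
       S.RDB$FIELD_POSITION,
       S.RDB$FIELD_NAME
FROM RDB$INDICES I
JOIN RDB$INDEX_SEGMENTS S
  ON S.RDB$INDEX_NAME = I.RDB$INDEX_NAME
WHERE COALESCE(I.RDB$SYSTEM_FLAG, 0) = 0
ORDER BY I.RDB$RELATION_NAME, I.RDB$INDEX_NAME, S.RDB$FIELD_POSITION;
```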
RDB$KEYWORDS
RDB$KEYWORDS
is a virtual table listing the keywords used by the Firebird SQL parser.
If a keyword is reserved, it cannot be used as a regular identifier, but only as a delimited (quoted) identifier.
Column Name | Data Type | Description |
---|---|---|
|
|
Keyword |
|
|
|
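As a sketch (assuming the standard column names `RDB$KEYWORD_NAME` and the Boolean `RDB$KEYWORD_RESERVED`), the reserved words, which must be double-quoted to be usable as identifiers, can be listed like this:

```sql
-- List reserved keywords; these can only be used as
-- delimited (quoted) identifiers.
SELECT RDB$KEYWORD_NAME
FROM RDB$KEYWORDS
WHERE RDB$KEYWORD_RESERVED
ORDER BY RDB$KEYWORD_NAME;
```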
RDB$PACKAGES
RDB$PACKAGES
stores the definition (header and body) of SQL packages.
Column Name | Data Type | Description |
---|---|---|
|
|
Name of the package |
|
|
The PSQL sourcecode of the package header |
|
|
The PSQL sourcecode of the package body |
|
|
Indicates whether the body of the package is still valid.
|
|
|
May reference a security class defined in the table |
|
|
The username of the user who created the package originally |
|
|
Flag:
|
|
|
Optional description of the package (comment) |
|
|
The
|
RDB$PAGES
RDB$PAGES
stores and maintains information about database pages and their usage.
Column Name | Data Type | Description |
---|---|---|
|
|
The unique number of a physically created database page |
|
|
The identifier of the table to which the page is allocated |
|
|
The number of the page in the sequence of all pages allocated to this table |
|
|
Indicates the page type (data, index, |
RDB$PROCEDURES
RDB$PROCEDURES
stores the definitions of stored procedures, including their PSQL source code and its binary language representation (BLR).
The next table, RDB$PROCEDURE_PARAMETERS
, stores the definitions of input and output parameters.
Column Name | Data Type | Description |
---|---|---|
|
|
Stored procedure name (identifier) |
|
|
The procedure’s unique, system-generated identifier |
|
|
Indicates the number of input parameters.
|
|
|
Indicates the number of output parameters.
|
|
|
Any text comments related to the procedure |
|
|
The PSQL source code of the procedure |
|
|
The binary language representation (BLR) of the procedure code (PSQL procedures only) |
|
|
May point to the security class defined in the system table |
|
|
The username of the procedure’s Owner — the user who was |
|
|
A metadata description of the procedure, used internally for optimization |
|
|
Indicates whether the procedure is defined by a user (value 0) or by the system (a value of 1 or greater) |
|
|
Procedure type:
* for procedures created before Firebird 1.5 |
|
|
Indicates whether the source PSQL of the stored procedure remains valid after the latest |
|
|
Contains debugging information about variables used in the stored procedure |
|
|
Engine for external procedures.
|
|
|
The exported name of the external procedure in the procedure library.
Note, this is often not the same as |
|
|
Package name of the procedure (or |
|
|
|
|
|
The
|
RDB$PROCEDURE_PARAMETERS
RDB$PROCEDURE_PARAMETERS
stores the parameters of stored procedures and their attributes.
It holds one row for each parameter.
Column Name | Data Type | Description |
---|---|---|
|
|
Parameter name |
|
|
The name of the procedure where the parameter is defined |
|
|
The sequential number of the parameter |
|
|
Indicates whether the parameter is for input (value 0) or output (value 1) |
|
|
The name of the user-created domain, when a domain is referenced instead of a data type. If the name starts with the prefix “RDB$”, it is the name of the domain automatically generated by the system for the parameter. |
|
|
Could store comments related to the parameter |
|
|
Indicates whether the parameter was defined by the system (value 1 or greater) or by a user (value 0) |
|
|
The binary language representation (BLR) of the default value of the parameter |
|
|
The default value for the parameter, in PSQL code |
|
|
The identifier of the collation used for a character parameter |
|
|
The flag indicating whether |
|
|
Flag: indicates how this parameter is passed:
|
|
|
The name of the column the parameter references, if it was declared using |
|
|
The name of the table the parameter references, if it was declared using |
|
|
Package name of the procedure (or |
RDB$PUBLICATIONS
RDB$PUBLICATIONS
stores the replication publications defined in the database.
Column Name | Data Type | Description |
---|---|---|
|
|
Publication name |
|
|
The username of the user who created the publication |
|
|
Flag:
|
|
|
Inactive ( |
|
|
Automatically add new tables to publication:
|
In Firebird 5.0 there is a single (pre-defined) publication named |
RDB$PUBLICATION_TABLES
RDB$PUBLICATION_TABLES
stores the names of tables that are replicated as part of a publication.
Column Name | Data Type | Description |
---|---|---|
|
|
Publication name |
|
|
Table name |
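The two publication tables can be combined in a sketch like the following (assuming the standard column names `RDB$PUBLICATION_NAME` and `RDB$TABLE_NAME`) to show which tables are part of each publication:

```sql
-- Show the tables enabled for replication in each publication.
-- A LEFT JOIN keeps publications that currently have no tables.
SELECT P.RDB$PUBLICATION_NAME,
       PT.RDB$TABLE_NAME
FROM RDB$PUBLICATIONS P
LEFT JOIN RDB$PUBLICATION_TABLES PT
  ON PT.RDB$PUBLICATION_NAME = P.RDB$PUBLICATION_NAME
ORDER BY 1, 2;
```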
RDB$REF_CONSTRAINTS
RDB$REF_CONSTRAINTS
stores the attributes of the referential constraints — Foreign Key relationships and referential actions.
Column Name | Data Type | Description |
---|---|---|
|
|
Foreign key constraint name, defined by the user or automatically generated by the system |
|
|
The name of the primary or unique key constraint linked by the |
|
|
Not used.
The current value is |
|
|
Referential integrity actions applied to the foreign key record(s) when the primary (unique) key of the parent table is updated: |
|
|
Referential integrity actions applied to the foreign key record(s) when the primary (unique) key of the parent table is deleted: |
RDB$RELATIONS
RDB$RELATIONS
stores the top-level definitions and attributes of all tables and views in the system.
Column Name | Data Type | Description |
---|---|---|
|
|
The binary language representation (BLR) of the query specification of a view.
The field stores |
|
|
Contains the original source text of the query for a view, in SQL language.
User comments are included.
The field stores |
|
|
Could store comments related to the table or view |
|
|
Internal identifier of the table or view |
|
|
Indicates whether the table or view is user-defined (value 0) or system-defined (value 1 or greater) |
|
|
The total length of the database key. For a table: 8 bytes. For a view, the length is 8 multiplied by the number of tables referenced by the view |
|
|
Internal use, points to the relation’s record in |
|
|
The field ID for the next column to be added. The number is not decremented when a column is dropped. |
|
|
Table or view name |
|
|
May reference a security class defined in the table |
|
|
The full path to the external data file if the table is defined with the |
|
|
Table metadata description, used internally for optimization |
|
|
Could store comments related to the external file of an external table |
|
|
The username of the user who created the table or view originally |
|
|
Default security class, used when a new column is added to the table |
|
|
Internal flags |
|
|
The type of the relation object being described:
|
|
|
The
|
RDB$RELATION_CONSTRAINTS
RDB$RELATION_CONSTRAINTS
stores the definitions of all table-level constraints: primary, unique, foreign key, CHECK
, NOT NULL
constraints.
Column Name | Data Type | Description |
---|---|---|
|
|
The name of the table-level constraint defined by the user, or otherwise automatically generated by the system |
|
|
The name of the constraint type: |
|
|
The name of the table this constraint applies to |
|
|
Currently |
|
|
Currently |
|
|
The name of the index that supports this constraint.
For a |
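As a sketch (assuming the standard column names `RDB$RELATION_NAME`, `RDB$CONSTRAINT_NAME`, `RDB$CONSTRAINT_TYPE` and `RDB$INDEX_NAME`), the key constraints of each table can be listed together with the index that enforces them:

```sql
-- List key constraints and the index that supports each one.
-- CHECK and NOT NULL constraints are excluded here because they
-- are not backed by an index.
SELECT RDB$RELATION_NAME,
       RDB$CONSTRAINT_NAME,
       RDB$CONSTRAINT_TYPE,
       RDB$INDEX_NAME
FROM RDB$RELATION_CONSTRAINTS
WHERE RDB$CONSTRAINT_TYPE IN ('PRIMARY KEY', 'UNIQUE', 'FOREIGN KEY')
ORDER BY RDB$RELATION_NAME, RDB$CONSTRAINT_NAME;
```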
RDB$RELATION_FIELDS
RDB$RELATION_FIELDS
stores the definitions of table and view columns.
Column Name | Data Type | Description |
---|---|---|
|
|
Column name |
|
|
The name of the table or view that the column belongs to |
|
|
Domain name on which the column is based, either a user-defined one specified in the table definition or one created automatically by the system using the set of attributes defined.
The attributes are in the table |
|
|
Not currently used |
|
|
Only populated for a view, it is the name of the column from the base table |
|
|
Not used |
|
|
The zero-based ordinal position of the column in the table or view, numbering from left to right |
|
|
Not used |
|
|
Indicates whether the column is a regular one (value 1) or a computed one (value 0) |
|
|
An ID assigned from |
|
|
For a view column, the internal identifier of the base table from which this field derives |
|
|
Comments related to the table or view column |
|
|
The binary language representation (BLR) of the default value of the column |
|
|
Indicates whether the column is user-defined (value 0) or system-defined (value 1 or greater) |
|
|
May reference a security class defined in |
|
|
Not used |
|
|
Indicates whether the column is nullable ( |
|
|
The source text of the |
|
|
The identifier of the collation in the character set for the column, if it is not the default collation |
|
|
Internal generator name for generating an identity value for the column. |
|
|
The identity type of the column
|
RDB$ROLES
RDB$ROLES
stores the roles that have been defined in this database.
Column Name | Data Type | Description |
---|---|---|
|
|
Role name |
|
|
The username of the role owner |
|
|
Could store comments related to the role |
|
|
System flag |
|
|
May reference a security class defined in the table |
|
|
Bitset with the system privileges granted to a role, with the following bits 0 - unused |
RDB$SECURITY_CLASSES
RDB$SECURITY_CLASSES
stores the access control lists.
Column Name | Data Type | Description |
---|---|---|
|
|
Security class name |
|
|
The access control list related to the security class. It enumerates users and their privileges |
|
|
Could store comments related to the security class |
RDB$TIME_ZONES
RDB$TIME_ZONES
lists the named time zones supported by the engine.
It is a virtual table that is populated using the current time zone database of the Firebird engine.
Column Name | Data Type | Description |
---|---|---|
|
|
The unique identifier of the time zone as used by Firebird.
For example, this identifier is used in the |
|
|
Name of the time zone as specified by the time zone database |
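Since this is a virtual table populated from the engine's current time zone database, its contents can simply be browsed (assuming the standard column names `RDB$TIME_ZONE_ID` and `RDB$TIME_ZONE_NAME`):

```sql
-- Browse the named time zones known to this Firebird engine.
SELECT RDB$TIME_ZONE_ID,
       RDB$TIME_ZONE_NAME
FROM RDB$TIME_ZONES
ORDER BY RDB$TIME_ZONE_NAME;
```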
RDB$TRANSACTIONS
RDB$TRANSACTIONS
stores the states of distributed transactions and other transactions that were prepared for two-phase commit with an explicit prepare message.
Column Name | Data Type | Description |
---|---|---|
|
|
The unique identifier of the transaction being tracked |
|
|
Transaction state:
|
|
|
Not used |
|
|
Describes the prepared transaction and could be a custom message supplied to |
RDB$TRIGGERS
RDB$TRIGGERS
stores the trigger definitions for all tables and views.
Column Name | Data Type | Description |
---|---|---|
|
|
Trigger name |
|
|
The name of the table or view the trigger applies to. NULL if the trigger is applicable to a database event (“database trigger”) |
|
|
Position of this trigger in the sequence. Zero usually means that no sequence position is specified |
|
|
The event the trigger fires on, see |
|
|
Stores the source code of the trigger in PSQL |
|
|
The binary language representation (BLR) of the trigger code (PSQL trigger only) |
|
|
Trigger comment text |
|
|
Indicates whether the trigger is currently inactive (1) or active (0) |
|
|
Flag: indicates whether the trigger is user-defined (value 0) or system-defined (value 1 or greater) |
|
|
Internal use |
|
|
Indicates whether the text of the trigger remains valid after the latest modification by the |
|
|
Contains debugging information about variables used in the trigger |
|
|
Engine for external triggers.
|
|
|
The exported name of the external trigger in the trigger library.
Note, this is often not the same as |
|
|
The
|
RDB$TRIGGER_TYPE
Value
The value of RDB$TRIGGER_TYPE
is built from:
1
|
before insert |
2
|
after insert |
3
|
before update |
4
|
after update |
5
|
before delete |
6
|
after delete |
17
|
before insert or update |
18
|
after insert or update |
25
|
before insert or delete |
26
|
after insert or delete |
27
|
before update or delete |
28
|
after update or delete |
113
|
before insert or update or delete |
114
|
after insert or update or delete |
8192
|
on connect |
8193
|
on disconnect |
8194
|
on transaction start |
8195
|
on transaction commit |
8196
|
on transaction rollback |
Identification of the exact RDB$TRIGGER_TYPE
code is a little more complicated, since it is a bitmap, calculated according to which phase and events are covered and the order in which they are defined.
For the curious, the calculation is explained in this code comment by Mark Rotteveel.
For DDL triggers, the trigger type is obtained by a bitwise OR of the event phase (0 for BEFORE, 1 for AFTER) and all the listed event types:
0x0000000000004002
|
|
0x0000000000004004
|
|
0x0000000000004008
|
|
0x0000000000004010
|
|
0x0000000000004020
|
|
0x0000000000004040
|
|
0x0000000000004080
|
|
0x0000000000004100
|
|
0x0000000000004200
|
|
0x0000000000004400
|
|
0x0000000000004800
|
|
0x0000000000005000
|
|
0x0000000000014000
|
|
0x0000000000024000
|
|
0x0000000000044000
|
|
0x0000000000084000
|
|
0x0000000000104000
|
|
0x0000000000204000
|
|
0x0000000000404000
|
|
0x0000000000804000
|
|
0x0000000001004000
|
|
0x0000000002004000
|
|
0x0000000004004000
|
|
0x0000000008004000
|
|
0x0000000010004000
|
|
0x0000000020004000
|
|
0x0000000040004000
|
|
0x0000000080004000
|
|
0x0000000100004000
|
|
0x0000000200004000
|
|
0x0000000400004000
|
|
0x0000000800004000
|
|
0x0000001000004000
|
|
0x0000002000004000
|
|
0x0000004000004000
|
|
0x0000008000004000
|
|
0x0000010000004000
|
|
0x0000020000004000
|
|
0x0000040000004000
|
|
0x0000080000004000
|
|
0x0000100000004000
|
|
0x0000200000004000
|
|
0x0000400000004000
|
|
0x0000800000004000
|
|
0x7FFFFFFFFFFFDFFE
|
|
For example, a trigger with BEFORE CREATE PROCEDURE OR CREATE FUNCTION will be of type 0x0000000000004090; AFTER CREATE PROCEDURE OR CREATE FUNCTION, 0x0000000000004091; BEFORE DROP FUNCTION OR DROP EXCEPTION, 0x0000000000044200; AFTER DROP FUNCTION OR DROP EXCEPTION, 0x0000000000044201; BEFORE DROP TRIGGER OR DROP DOMAIN, 0x0000000001005000; AFTER DROP TRIGGER OR DROP DOMAIN, 0x0000000001005001.
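A rough classification based on the value ranges described above can be sketched as follows (assuming the standard column names `RDB$TRIGGER_NAME`, `RDB$TRIGGER_TYPE` and `RDB$SYSTEM_FLAG`; the literal boundaries simply reflect that DML trigger codes are small, database trigger codes start at 8192, and DDL trigger codes are larger bitmaps):

```sql
-- Classify user-defined triggers by the range of their type code.
-- DML codes run up to 114, database trigger codes are 8192-8196,
-- DDL trigger codes are larger bitmap values.
SELECT RDB$TRIGGER_NAME,
       RDB$TRIGGER_TYPE,
       CASE
         WHEN RDB$TRIGGER_TYPE < 8192 THEN 'DML'
         WHEN RDB$TRIGGER_TYPE BETWEEN 8192 AND 8196 THEN 'DATABASE'
         ELSE 'DDL'
       END AS TRIGGER_CLASS
FROM RDB$TRIGGERS
WHERE COALESCE(RDB$SYSTEM_FLAG, 0) = 0
ORDER BY RDB$TRIGGER_TYPE;
```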
RDB$TRIGGER_MESSAGES
RDB$TRIGGER_MESSAGES
stores the trigger messages.
Column Name | Data Type | Description |
---|---|---|
|
|
The name of the trigger the message is associated with |
|
|
The number of the message within this trigger (from 1 to 32,767) |
|
|
Text of the trigger message |
RDB$TYPES
RDB$TYPES
stores the defining sets of enumerated types used throughout the system.
Column Name | Data Type | Description |
---|---|---|
|
|
Enumerated type name. Each type name has its own set of member values, e.g., object types, data types, character sets, trigger types, BLOB subtypes, etc. |
|
|
The object type identifier.
A unique series of numbers is used within each separate enumerated type.
For example, for the
|
|
|
The name of a member of an enumerated type, e.g., TABLE, VIEW, TRIGGER, etc. in the example above.
In the |
|
|
Any text comments related to the enumerated type |
|
|
Flag: indicates whether the type-member is user-defined (value 0) or system-defined (value 1 or greater) |
RDB$USER_PRIVILEGES
RDB$USER_PRIVILEGES
stores the SQL access privileges for Firebird users and privileged objects.
Column Name | Data Type | Description |
---|---|---|
|
|
The user or object that is granted this privilege |
|
|
The user who grants the privilege |
|
|
The privilege granted hereby:
|
|
|
Whether the WITH GRANT OPTION authority is included with the privilege:
|
|
|
The name of the object (table, view, procedure or role) the privilege is granted ON |
|
|
The name of the column the privilege is applicable to, for a column-level privilege (an |
|
|
Identifies the type of user the privilege is granted TO (a user, a procedure, a view, etc.) |
|
|
Identifies the type of the object the privilege is granted ON
|
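As a sketch of how grants can be inspected (assuming the standard column names `RDB$USER`, `RDB$PRIVILEGE`, `RDB$RELATION_NAME`, `RDB$FIELD_NAME` and `RDB$GRANT_OPTION`; the username `'SOMEUSER'` is a placeholder):

```sql
-- Show the privileges granted to a particular user.
-- 'SOMEUSER' is a hypothetical name; substitute a real one.
SELECT RDB$PRIVILEGE,
       RDB$RELATION_NAME,
       RDB$FIELD_NAME,
       RDB$GRANT_OPTION
FROM RDB$USER_PRIVILEGES
WHERE RDB$USER = 'SOMEUSER'
ORDER BY RDB$RELATION_NAME, RDB$PRIVILEGE;
```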
RDB$VIEW_RELATIONS
RDB$VIEW_RELATIONS
stores the tables that are referred to in view definitions.
There is one record for each table in a view.
Column Name | Data Type | Description |
---|---|---|
|
|
View name |
|
|
The name of the table, view or stored procedure the view references |
|
|
The alias used to reference the view column in the BLR code of the query definition |
|
|
The text associated with the alias reported in the |
|
|
Context type:
|
|
|
Package name for a stored procedure in a package |
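Since there is one record per table referenced by a view, a dependency check can be sketched as follows (assuming the standard column names `RDB$VIEW_NAME` and `RDB$RELATION_NAME`; the table name `'SOMETABLE'` is a placeholder):

```sql
-- Find all views that depend on a given table.
-- 'SOMETABLE' is a hypothetical name; substitute a real one.
SELECT DISTINCT RDB$VIEW_NAME
FROM RDB$VIEW_RELATIONS
WHERE RDB$RELATION_NAME = 'SOMETABLE'
ORDER BY RDB$VIEW_NAME;
```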
Appendix E: Monitoring Tables
The Firebird engine can monitor activities in a database and make them available for user queries via the monitoring tables.
The definitions of these tables are always present in the database, all named with the prefix MON$
.
The tables are virtual: they are populated with data only at the moment when the user queries them.
That is also one good reason why it is no use trying to create triggers for them!
The key notion in understanding the monitoring feature is an activity snapshot. The activity snapshot represents the current state of the database at the start of the transaction in which the monitoring table query runs. It delivers a lot of information about the database itself, active connections, users, transactions prepared, running queries and more.
The snapshot is created when any monitoring table is queried for the first time.
It is preserved until the end of the current transaction to maintain a stable, consistent view for queries across multiple tables, such as a master-detail query.
In other words, monitoring tables always behave as though they were in SNAPSHOT TABLE STABILITY
(“consistency”) isolation, even if the current transaction is started with a lower isolation level.
To refresh the snapshot, the current transaction must be completed and the monitoring tables must be re-queried in a new transaction context.
-
SYSDBA and the database owner have full access to all information available from the monitoring tables
-
Regular users can see information about their own connections; other connections are not visible to them
In a highly loaded environment, collecting information via the monitoring tables could have a negative impact on system performance. |
MON$ATTACHMENTS
-
Information about active attachments to the database
MON$CALL_STACK
-
The call stack of active queries executing stored procedures and triggers
MON$COMPILED_STATEMENTS
-
Virtual table listing compiled statements
MON$CONTEXT_VARIABLES
-
Information about custom context variables
MON$DATABASE
-
Information about the database to which the
CURRENT_CONNECTION
is attached MON$IO_STATS
-
Input/output statistics
MON$MEMORY_USAGE
-
Memory usage statistics
MON$RECORD_STATS
-
Record-level statistics
MON$STATEMENTS
-
Statements prepared for execution
MON$TABLE_STATS
-
Table-level statistics
MON$TRANSACTIONS
-
Started transactions
MON$ATTACHMENTS
MON$ATTACHMENTS
displays information about active attachments to the database.
Column Name | Data Type | Description |
---|---|---|
|
|
Connection identifier |
|
|
Server process identifier |
|
|
Connection state:
|
|
|
Connection string — the file name and full path to the primary database file |
|
|
The name of the user who is using this connection |
|
|
The role name specified when the connection was established.
If no role was specified when the connection was established, the field contains the text |
|
|
Remote protocol name |
|
|
Remote address (address and server name) |
|
|
Remote client process identifier |
|
|
Connection character set identifier (see |
|
|
The date and time when the connection was started |
|
|
Garbage collection flag (as specified in the attachment’s DPB): 1=allowed, 0=not allowed |
|
|
The full file name and path to the executable file that established this connection |
|
|
Statistics identifier |
|
|
Client library version |
|
|
Remote protocol version |
|
|
Name of the remote host |
|
|
Name of remote user |
|
|
Name of authentication plugin used to connect |
|
|
Flag that indicates the type of connection:
|
|
|
Connection-level idle timeout in seconds.
When |
|
|
Idle timer expiration time |
|
|
Connection-level statement timeout in milliseconds.
When |
|
|
Wire compression active (TRUE) or inactive (FALSE) |
|
|
Wire encryption active (TRUE) or inactive (FALSE) |
|
|
Name of the wire encryption plugin used |
|
|
Name of the session time zone |
|
|
Maximum number of parallel workers for this connection, |
SELECT MON$USER, MON$REMOTE_ADDRESS, MON$REMOTE_PID, MON$TIMESTAMP
FROM MON$ATTACHMENTS
WHERE MON$ATTACHMENT_ID <> CURRENT_CONNECTION
Using MON$ATTACHMENTS
to Kill a Connection
Monitoring tables are read-only.
However, the server has a built-in mechanism for deleting (and only deleting) records in the MON$ATTACHMENTS
table, which makes it possible to close a connection to the database.
-
All the current activity in the connection being deleted is immediately stopped and all active transactions are rolled back
-
The closed connection will return an error with the
isc_att_shutdown
code to the application -
Subsequent attempts to use this connection (i.e. use its handle in API calls) will return errors
Termination of system connections (MON$SYSTEM_FLAG = 1
) is not possible.
The server will skip system connections in a DELETE FROM MON$ATTACHMENTS
.
DELETE FROM MON$ATTACHMENTS
WHERE MON$ATTACHMENT_ID <> CURRENT_CONNECTION
MON$COMPILED_STATEMENTS
Virtual table listing compiled statements.
Column Name | Data Type | Description |
---|---|---|
|
|
Compiled statement id |
|
|
Statement text |
|
|
Explained query plan |
|
|
PSQL object name |
|
|
PSQL object type:
|
|
|
PSQL object package name |
|
|
Statistics identifier |
MON$CALL_STACK
MON$CALL_STACK
displays the call stack of active queries that execute stored procedures and triggers.
Column Name | Data Type | Description |
---|---|---|
|
|
Call identifier |
|
|
The identifier of the top-level SQL statement, the one that initiated the chain of calls.
Use this identifier to find the records about the active statement in the |
|
|
The identifier of the calling trigger or stored procedure |
|
|
PSQL object name |
|
|
PSQL object type:
|
|
|
The date and time when the call was started |
|
|
The number of the source line in the SQL statement being executed at the moment of the snapshot |
|
|
The number of the source column in the SQL statement being executed at the moment of the snapshot |
|
|
Statistics identifier |
|
|
Package name for stored procedures or functions in a package |
|
|
Compiled statement id |
Information about calls during the execution of an EXECUTE STATEMENT statement is not reported in the call stack.
WITH RECURSIVE
HEAD AS (
SELECT
CALL.MON$STATEMENT_ID, CALL.MON$CALL_ID,
CALL.MON$OBJECT_NAME, CALL.MON$OBJECT_TYPE
FROM MON$CALL_STACK CALL
WHERE CALL.MON$CALLER_ID IS NULL
UNION ALL
SELECT
CALL.MON$STATEMENT_ID, CALL.MON$CALL_ID,
CALL.MON$OBJECT_NAME, CALL.MON$OBJECT_TYPE
FROM MON$CALL_STACK CALL
JOIN HEAD ON CALL.MON$CALLER_ID = HEAD.MON$CALL_ID
)
SELECT MON$ATTACHMENT_ID, MON$OBJECT_NAME, MON$OBJECT_TYPE
FROM HEAD
JOIN MON$STATEMENTS STMT ON STMT.MON$STATEMENT_ID = HEAD.MON$STATEMENT_ID
WHERE STMT.MON$ATTACHMENT_ID <> CURRENT_CONNECTION
MON$CONTEXT_VARIABLES
MON$CONTEXT_VARIABLES
displays information about custom context variables.
Column Name | Data Type | Description |
---|---|---|
|
|
Connection identifier.
It contains a valid value only for a connection-level context variable.
For transaction-level variables it is |
|
|
Transaction identifier.
It contains a valid value only for transaction-level context variables.
For connection-level variables it is |
|
|
Context variable name |
|
|
Context variable value |
SELECT
VAR.MON$VARIABLE_NAME,
VAR.MON$VARIABLE_VALUE
FROM MON$CONTEXT_VARIABLES VAR
WHERE VAR.MON$ATTACHMENT_ID = CURRENT_CONNECTION
MON$DATABASE
MON$DATABASE
displays the header information from the database the current user is connected to.
Column Name | Data Type | Description |
---|---|---|
|
|
The file name and full path of the primary database file, or the database alias |
|
|
Database page size in bytes |
|
|
Major ODS version, e.g., 11 |
|
|
Minor ODS version, e.g., 2 |
|
|
The number of the oldest [interesting] transaction (OIT) |
|
|
The number of the oldest active transaction (OAT) |
|
|
The number of the transaction that was active at the moment when the OAT was started — oldest snapshot transaction (OST) |
|
|
The number of the next transaction, as it stood when the monitoring snapshot was taken |
|
|
The number of pages allocated in RAM for the database page cache |
|
|
Database SQL Dialect: 1 or 3 |
|
|
The current shutdown state of the database:
|
|
|
Sweep interval |
|
|
Flag indicating whether the database is read-only (value 1) or read-write (value 0) |
|
|
Indicates whether the write mode of the database is set for synchronous write (forced writes ON, value is 1) or asynchronous write (forced writes OFF, value is 0) |
|
|
The flag indicating reserve_space (value 1) or use_all_space (value 0) for filling database pages |
|
|
The date and time when the database was created or was last restored |
|
|
The number of pages allocated for the database on an external device |
|
|
Statistics identifier |
|
|
Current physical backup (nBackup) state:
|
|
|
Number of encrypted pages |
|
|
Username of the database owner |
|
|
Displays what type of security database is used:
|
|
|
Current state of database encryption
|
|
|
Database GUID (persistent until restore/fixup) |
|
|
Unique ID of the database file at the filesystem level |
|
|
Current value of the next attachment ID counter |
|
|
Current value of the next statement ID counter |
|
|
Database replica mode
|
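For example, a quick health check of the current database (page size, ODS version, forced writes and sweep interval) can be read directly from this table:

```sql
SELECT
  MON$PAGE_SIZE,
  MON$ODS_MAJOR,
  MON$ODS_MINOR,
  MON$FORCED_WRITES,
  MON$SWEEP_INTERVAL
FROM MON$DATABASE
```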
MON$IO_STATS
MON$IO_STATS
displays input/output statistics.
The counters are cumulative for each group of statistics.
Column Name | Data Type | Description |
---|---|---|
|
|
Statistics identifier |
|
|
Statistics group:
|
|
|
Count of database pages read |
|
|
Count of database pages written to |
|
|
Count of database pages fetched |
|
|
Count of database pages marked |
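For example, page I/O per connection can be obtained by joining on the statistics identifier (MON$STAT_ID):

```sql
SELECT
  ATT.MON$ATTACHMENT_ID,
  ATT.MON$USER,
  IO.MON$PAGE_READS,
  IO.MON$PAGE_WRITES,
  IO.MON$PAGE_FETCHES,
  IO.MON$PAGE_MARKS
FROM MON$ATTACHMENTS ATT
JOIN MON$IO_STATS IO ON ATT.MON$STAT_ID = IO.MON$STAT_ID
ORDER BY IO.MON$PAGE_READS DESC
```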
MON$MEMORY_USAGE
MON$MEMORY_USAGE
displays memory usage statistics.
Column Name | Data Type | Description |
---|---|---|
|
|
Statistics identifier |
|
|
Statistics group:
|
|
|
The amount of memory in use, in bytes. This data is about the high-level memory allocation performed by the server. It can be useful to track down memory leaks and excessive memory usage in connections, procedures, etc. |
|
|
The amount of memory allocated by the operating system, in bytes. This data is about the low-level memory allocation performed by the Firebird memory manager, and can be used to track the physical memory usage. |
|
|
The maximum number of bytes used by this object |
|
|
The maximum number of bytes allocated for this object by the operating system |
The counters associated with the database-level record in MON$DATABASE
(MON$STAT_GROUP = 0
) display the memory allocation for all connections.
In Classic and SuperClassic, zero values of the counters indicate that these architectures have no common cache.
Minor memory allocations are not accrued here but are added to the database memory pool instead.
SELECT
STMT.MON$ATTACHMENT_ID,
STMT.MON$SQL_TEXT,
MEM.MON$MEMORY_USED
FROM MON$MEMORY_USAGE MEM
NATURAL JOIN MON$STATEMENTS STMT
ORDER BY MEM.MON$MEMORY_USED DESC
FETCH FIRST 10 ROWS ONLY
MON$RECORD_STATS
MON$RECORD_STATS
displays record-level statistics.
The counters are cumulative for each group of statistics.
Column Name | Data Type | Description |
---|---|---|
|
|
Statistics identifier |
|
|
Statistics group:
|
|
|
Count of records read sequentially |
|
|
Count of records read via an index |
|
|
Count of inserted records |
|
|
Count of updated records |
|
|
Count of deleted records |
|
|
Count of records backed out |
|
|
Count of records purged |
|
|
Count of records expunged |
|
|
Number of records locked |
|
|
Number of update, delete or lock attempts on records owned by other active transactions.
Transaction is in |
|
|
Number of unsuccessful update, delete or lock attempts on records owned by other active transactions. These are reported as update conflicts. |
|
|
Number of back-versions read to find visible records |
|
|
Number of fragmented records read |
|
|
Number of repeated reads of records |
|
|
Number of records processed by the intermediate garbage collector |
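For example, record-level counters for the current connection can be obtained via the attachment's statistics identifier:

```sql
SELECT
  REC.MON$RECORD_SEQ_READS,
  REC.MON$RECORD_IDX_READS,
  REC.MON$RECORD_INSERTS,
  REC.MON$RECORD_UPDATES,
  REC.MON$RECORD_DELETES
FROM MON$ATTACHMENTS ATT
JOIN MON$RECORD_STATS REC ON ATT.MON$STAT_ID = REC.MON$STAT_ID
WHERE ATT.MON$ATTACHMENT_ID = CURRENT_CONNECTION
```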
MON$STATEMENTS
MON$STATEMENTS
displays statements prepared for execution.
Column Name | Data Type | Description |
---|---|---|
|
|
Statement identifier |
|
|
Connection identifier |
|
|
Transaction identifier |
|
|
Statement state:
|
|
|
The date and time when the statement was prepared |
|
|
Statement text in SQL |
|
|
Statistics identifier |
|
|
Explained execution plan |
|
|
Connection-level statement timeout in milliseconds.
When |
|
|
Statement timer expiration time |
|
|
Compiled statement id |
The STALLED state indicates that, at the time of the snapshot, the statement had an open cursor and was waiting for the client to resume fetching rows.
SELECT
ATT.MON$USER,
ATT.MON$REMOTE_ADDRESS,
STMT.MON$SQL_TEXT,
STMT.MON$TIMESTAMP
FROM MON$ATTACHMENTS ATT
JOIN MON$STATEMENTS STMT ON ATT.MON$ATTACHMENT_ID = STMT.MON$ATTACHMENT_ID
WHERE ATT.MON$ATTACHMENT_ID <> CURRENT_CONNECTION
AND STMT.MON$STATE = 1
Using MON$STATEMENTS
to Cancel a Query
Monitoring tables are read-only.
However, the server has a built-in mechanism for deleting (and only deleting) records in the MON$STATEMENTS
table, which makes it possible to cancel a running query.
-
If no statements are currently being executed in the connection, an attempt to cancel queries has no effect
-
After a query is cancelled, calling execute/fetch API functions will return an error with the
isc_cancelled
code
-
Subsequent queries from this connection will proceed as normal
-
Cancellation of the statement does not occur synchronously; it only marks the request for cancellation, and the cancellation itself is performed asynchronously by the server
Cancelling all active queries for the specified connection:
DELETE FROM MON$STATEMENTS
WHERE MON$ATTACHMENT_ID = 32
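A single query can likewise be cancelled by deleting the row for its statement identifier (the id 84 here is only an illustration):

```sql
DELETE FROM MON$STATEMENTS
WHERE MON$STATEMENT_ID = 84
```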
MON$TABLE_STATS
MON$TABLE_STATS
reports table-level statistics.
Column Name | Data Type | Description |
---|---|---|
|
|
Statistics identifier |
|
|
Statistics group:
|
|
|
Name of the table |
|
|
Link to |
SELECT
t.mon$table_name,
r.mon$record_inserts,
r.mon$record_updates,
r.mon$record_deletes,
r.mon$record_backouts,
r.mon$record_purges,
r.mon$record_expunges,
------------------------
r.mon$record_seq_reads,
r.mon$record_idx_reads,
r.mon$record_rpt_reads,
r.mon$backversion_reads,
r.mon$fragment_reads,
------------------------
r.mon$record_locks,
r.mon$record_waits,
r.mon$record_conflicts,
------------------------
a.mon$stat_id
FROM mon$record_stats r
JOIN mon$table_stats t ON r.mon$stat_id = t.mon$record_stat_id
JOIN mon$attachments a ON t.mon$stat_id = a.mon$stat_id
WHERE a.mon$attachment_id = CURRENT_CONNECTION
MON$TRANSACTIONS
MON$TRANSACTIONS
reports started transactions.
Column Name | Data Type | Description |
---|---|---|
|
|
Transaction identifier (number) |
|
|
Connection identifier |
|
|
Transaction state:
|
|
|
The date and time when the transaction was started |
|
|
Top-level transaction identifier (number) |
|
|
Transaction ID of the oldest [interesting] transaction (OIT) |
|
|
Transaction ID of the oldest active transaction (OAT) |
|
|
Isolation mode (level):
|
|
|
Lock timeout:
|
|
|
Flag indicating whether the transaction is read-only (value 1) or read-write (value 0) |
|
|
Flag indicating whether automatic commit is used for the transaction (value 1) or not (value 0) |
|
|
Flag indicating whether the logging mechanism automatic undo is used for the transaction (value 1) or not (value 0) |
|
|
Statistics identifier |
SELECT DISTINCT a.*
FROM mon$attachments a
JOIN mon$transactions t ON a.mon$attachment_id = t.mon$attachment_id
WHERE NOT (t.mon$read_only = 1 AND t.mon$isolation_mode >= 2)
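Another common task is finding the longest-running transactions, for instance when tracking down a stuck oldest active transaction:

```sql
SELECT
  T.MON$TRANSACTION_ID,
  T.MON$ATTACHMENT_ID,
  T.MON$TIMESTAMP
FROM MON$TRANSACTIONS T
ORDER BY T.MON$TIMESTAMP
FETCH FIRST 10 ROWS ONLY
```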
Appendix F: Security tables
The names of the security tables have SEC$
as prefix.
They display data from the current security database.
These tables are virtual in the sense that no data is recorded in them until a user accesses them; they are populated only at the moment of the user's request.
However, the descriptions of these tables are constantly present in the database.
In this sense, these virtual tables are like tables of the MON$
-family used to monitor the server.
-
SYSDBA, users with the RDB$ADMIN role in the security database and the current database, and the owner of the security database have full access to all information provided by the security tables.
-
Regular users can only see information on themselves; other users are not visible.
These features are highly dependent on the user management plugin. Keep in mind that some options are ignored when using the legacy user management plugin. |
SEC$DB_CREATORS
-
Lists users and roles granted the
CREATE DATABASE
privilege SEC$GLOBAL_AUTH_MAPPING
-
Information about global authentication mappings
SEC$USERS
-
Lists users in the current security database
SEC$USER_ATTRIBUTES
-
Additional attributes of users
SEC$DB_CREATORS
Lists users and roles granted the CREATE DATABASE
privilege.
Column Name | Data Type | Description |
---|---|---|
|
|
Name of the user or role |
|
|
Type of user:
|
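For example, to see who may create databases (the column names SEC$USER and SEC$USER_TYPE follow the SEC$ naming convention of this table):

```sql
SELECT
  SEC$USER,
  SEC$USER_TYPE
FROM SEC$DB_CREATORS
```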
SEC$GLOBAL_AUTH_MAPPING
Lists global authentication mappings in the current security database.
Column Name | Data Type | Description |
---|---|---|
|
|
Name of the mapping |
|
|
Using definition:
|
|
|
Mapping applies for authentication information from this specific plugin |
|
|
Mapping applies for authentication information from this specific database |
|
|
The type of authentication object (defined by plugin) to map from, or |
|
|
The name of the authentication object to map from |
|
|
The type to map to
|
|
|
The name to map to |
|
|
Comment on the mapping |
SEC$USERS
Lists users in the current security database.
Column Name | Data Type | Description |
---|---|---|
|
|
Username |
|
|
First name |
|
|
Middle name |
|
|
Last name |
|
|
|
|
|
|
|
|
Description (comment) on the user |
|
|
Authentication plugin name that manages this user |
Multiple users can exist with the same username, each managed by a different authentication plugin. |
SEC$USER_ATTRIBUTES
Additional attributes of users
Column Name | Data Type | Description |
---|---|---|
|
|
Username |
|
|
Attribute name |
|
|
Attribute value |
|
|
Authentication plugin name that manages this user |
SELECT
U.SEC$USER_NAME AS LOGIN,
A.SEC$KEY AS TAG,
A.SEC$VALUE AS "VALUE",
U.SEC$PLUGIN AS "PLUGIN"
FROM SEC$USERS U
LEFT JOIN SEC$USER_ATTRIBUTES A
ON U.SEC$USER_NAME = A.SEC$USER_NAME
AND U.SEC$PLUGIN = A.SEC$PLUGIN;
LOGIN TAG VALUE PLUGIN
======== ======= ======= ===================
SYSDBA <null> <null> Srp
ALEX B x Srp
ALEX C sample Srp
SYSDBA <null> <null> Legacy_UserManager
Appendix G: Plugin tables
Plugin tables are tables — or views — created for or by various plugins to the Firebird engine.
The standard plugin tables have the prefix PLG$
.
The plugin tables do not always exist. For example, some tables only exist in the security database, and other tables will only be created on first use of a plugin. This appendix only documents plugin tables which are created by plugins included in a standard Firebird 5.0 deployment. Plugin tables are not considered system tables. |
Profiler table names are plugin-specific
The tables listed in this appendix for the profiler (starting with |
PLG$PROF_CURSORS
-
Profiler information on cursors
PLG$PROF_PSQL_STATS
-
Profiler PSQL statistics
PLG$PROF_PSQL_STATS_VIEW
-
Profiler aggregated view for PSQL statistics
PLG$PROF_RECORD_SOURCES
-
Profiler information on record sources
PLG$PROF_RECORD_SOURCE_STATS
-
Profiler record source statistics
PLG$PROF_RECORD_SOURCE_STATS_VIEW
-
Profiler aggregated view for record source statistics
PLG$PROF_REQUESTS
-
Profiler information on requests
PLG$PROF_SESSIONS
-
Profiler sessions
PLG$PROF_STATEMENTS
-
Profiler information on statements
PLG$PROF_STATEMENT_STATS_VIEW
-
Profiler aggregated view for statement statistics
PLG$SRP
-
Users and authentication information of the Srp user manager
PLG$USERS
-
User and authentication information of the Legacy_UserManager user manager
PLG$PROF_CURSORS
Profiler information on cursors.
Column Name | Data Type | Description |
---|---|---|
|
|
Profile session id |
|
|
Statement id |
|
|
Cursor id |
|
|
Name of explicit cursor |
|
|
PSQL line number of the cursor |
|
|
PSQL column number of the cursor |
PLG$PROF_PSQL_STATS
Profiler PSQL statistics.
Column Name | Data Type | Description |
---|---|---|
|
|
Profile session id |
|
|
Statement id |
|
|
Request id |
|
|
PSQL line number of the statement |
|
|
PSQL column number of the statement |
|
|
Number of times the line/column was executed |
|
|
Minimal elapsed time (in nanoseconds) of a line/column execution |
|
|
Maximum elapsed time (in nanoseconds) of a line/column execution |
|
|
Accumulated elapsed time (in nanoseconds) of the line/column executions |
PLG$PROF_PSQL_STATS_VIEW
Profiler aggregated view for PSQL statistics.
Column Name | Data Type | Description |
---|---|---|
|
|
Profile session id |
|
|
Statement id |
|
|
Statement type: |
|
|
Package name |
|
|
Routine name |
|
|
Parent statement id |
|
|
Statement type: |
|
|
Parent routine name |
|
|
SQL text (if statement type is |
|
|
PSQL line number of the statement |
|
|
PSQL column number of the statement |
|
|
Number of times the line/column was executed |
|
|
Minimal elapsed time (in nanoseconds) of a line/column execution |
|
|
Maximum elapsed time (in nanoseconds) of a line/column execution |
|
|
Accumulated elapsed time (in nanoseconds) of the line/column executions |
|
|
Average elapsed time (in nanoseconds) of the line/column executions |
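A profiling session is started and finished with the RDB$PROFILER package; once the session data is flushed, the aggregated view can be queried for the slowest PSQL lines. A sketch, assuming the default profiler plugin and its documented view column names:

```sql
-- Start a profiler session (returns the session id)
SELECT RDB$PROFILER.START_SESSION('my workload') FROM RDB$DATABASE;

-- ... run the stored procedures / triggers to be profiled ...

-- Finish the session and flush data to the PLG$PROF_* tables
EXECUTE PROCEDURE RDB$PROFILER.FINISH_SESSION;

-- Slowest PSQL lines, aggregated
SELECT ROUTINE_NAME, LINE_NUM, COLUMN_NUM, COUNTER, TOTAL_ELAPSED_TIME
FROM PLG$PROF_PSQL_STATS_VIEW
ORDER BY TOTAL_ELAPSED_TIME DESC
FETCH FIRST 10 ROWS ONLY
```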
PLG$PROF_RECORD_SOURCES
Profiler information on record sources.
Column Name | Data Type | Description |
---|---|---|
|
|
Profile session id |
|
|
Statement id |
|
|
Cursor id |
|
|
Record source id |
|
|
Parent record source id |
|
|
Indentation level for the record source |
|
|
Access path of the record source |
PLG$PROF_RECORD_SOURCE_STATS
Profiler record source statistics.
Column Name | Data Type | Description |
---|---|---|
|
|
Profile session id |
|
|
Statement id |
|
|
Request id |
|
|
Cursor id |
|
|
Record source id |
|
|
Number of times the record source was opened |
|
|
Minimal elapsed time (in nanoseconds) of a record source open |
|
|
Maximum elapsed time (in nanoseconds) of a record source open |
|
|
Accumulated elapsed time (in nanoseconds) of record source opens |
|
|
Number of fetches from the record source |
|
|
Minimal elapsed time (in nanoseconds) of a record source fetch |
|
|
Maximum elapsed time (in nanoseconds) of a record source fetch |
|
|
Accumulated elapsed time (in nanoseconds) of record source fetches |
PLG$PROF_RECORD_SOURCE_STATS_VIEW
Profiler aggregated view for record source statistics.
Column Name | Data Type | Description |
---|---|---|
|
|
Profile session id |
|
|
Statement id |
|
|
Statement type: |
|
|
Package name |
|
|
Routine name |
|
|
Parent statement id |
|
|
Statement type: |
|
|
Parent routine name |
|
|
SQL text (if statement type is |
|
|
Cursor id |
|
|
Name of explicit cursor |
|
|
PSQL line number of the cursor |
|
|
PSQL column number of the cursor |
|
|
Record source id |
|
|
Parent record source id |
|
|
Indentation level for the record source |
|
|
Access path of the record source |
|
|
Number of times the record source was opened |
|
|
Minimal elapsed time (in nanoseconds) of a record source open |
|
|
Maximum elapsed time (in nanoseconds) of a record source open |
|
|
Accumulated elapsed time (in nanoseconds) of record source opens |
|
|
Average elapsed time (in nanoseconds) of record source opens |
|
|
Number of fetches from the record source |
|
|
Minimal elapsed time (in nanoseconds) of a record source fetch |
|
|
Maximum elapsed time (in nanoseconds) of a record source fetch |
|
|
Accumulated elapsed time (in nanoseconds) of record source fetches |
|
|
Average elapsed time (in nanoseconds) of record source fetches |
|
|
Total elapsed time (in nanoseconds) of record source opens and fetches |
PLG$PROF_REQUESTS
Profiler information on requests.
Column Name | Data Type | Description |
---|---|---|
|
|
Profile session id |
|
|
Statement id |
|
|
Request id |
|
|
Caller statement id |
|
|
Caller request id |
|
|
Instant when request started |
|
|
Instant when request finished |
|
|
Accumulated elapsed time (in nanoseconds) of the request |
PLG$PROF_SESSIONS
Profiler sessions.
Column Name | Data Type | Description |
---|---|---|
|
|
Profile session id |
|
|
Attachment id |
|
|
User which started the profile session |
|
|
Description of the profile session (parameter of |
|
|
Instant when session started |
|
|
Instant when session finished |
PLG$PROF_STATEMENTS
Profiler information on statements.
Column Name | Data Type | Description |
---|---|---|
|
|
Profile session id |
|
|
Statement id |
|
|
Parent statement id |
|
|
Statement type: |
|
|
Package name |
|
|
Routine name |
|
|
SQL text (if statement type is |
PLG$PROF_STATEMENT_STATS_VIEW
Profiler aggregated view for statement statistics.
Column Name | Data Type | Description |
---|---|---|
|
|
Profile session id |
|
|
Statement id |
|
|
Statement type: |
|
|
Package name |
|
|
Routine name |
|
|
Parent statement id |
|
|
Parent statement type: |
|
|
Parent routine name |
|
|
SQL text (if statement type is |
|
|
Number of times the statement was executed |
|
|
Minimal elapsed time (in nanoseconds) of a statement execution |
|
|
Maximum elapsed time (in nanoseconds) of a statement execution |
|
|
Accumulated elapsed time (in nanoseconds) of statement executions |
|
|
Average elapsed time (in nanoseconds) of statement executions |
PLG$SRP
User and authentication information of the Srp user manager, used for authentication by the Srp family of authentication plugins.
Column Name | Data Type | Description |
---|---|---|
|
|
Username |
|
|
SRP verifier[2]
2. See http://srp.stanford.edu/design.html for details
|
|
|
User-specific salt |
|
|
Comment text |
|
|
Firstname |
|
|
Middle name |
|
|
Lastname |
|
|
User attributes (a.k.a. tags) |
|
|
Active or inactive user |
PLG$USERS
User and authentication information of the Legacy_UserManager user manager, used for authentication by the Legacy_Auth authentication plugins.
Column Name | Data Type | Description |
---|---|---|
|
|
Username |
|
|
Group name |
|
|
User id |
|
|
Group id |
|
|
Password hash |
|
|
Comment text |
|
|
Firstname |
|
|
Middle name |
|
|
Lastname |
Appendix H: Character Sets and Collations
Character Set | ID | Bytes per Char | Collation | Language |
---|---|---|---|---|
|
2 |
1 |
|
English |
|
56 |
2 |
|
Chinese, Vietnamese, Korean |
|
68 |
2 |
|
Japanese |
〃 |
〃 |
〃 |
|
Japanese |
|
50 |
1 |
|
Russian |
〃 |
〃 |
〃 |
|
Russian dBase |
〃 |
〃 |
〃 |
|
Russian Paradox |
|
10 |
1 |
|
U.S. English |
〃 |
〃 |
〃 |
|
German dBase |
〃 |
〃 |
〃 |
|
Spanish dBase |
〃 |
〃 |
〃 |
|
Finnish dBase |
〃 |
〃 |
〃 |
|
French dBase |
〃 |
〃 |
〃 |
|
Italian dBase |
〃 |
〃 |
〃 |
|
Dutch dBase |
〃 |
〃 |
〃 |
|
Swedish dBase |
〃 |
〃 |
〃 |
|
English (Great Britain) dBase |
〃 |
〃 |
〃 |
|
U.S. English dBase |
〃 |
〃 |
〃 |
|
Code page Paradox-ASCII |
〃 |
〃 |
〃 |
|
International English Paradox |
〃 |
〃 |
〃 |
|
Swedish / Finnish Paradox |
|
9 |
1 |
|
Greek |
|
15 |
1 |
|
Baltic |
|
11 |
1 |
|
Latin I (no Euro symbol) |
〃 |
〃 |
〃 |
|
German |
〃 |
〃 |
〃 |
|
Spanish |
〃 |
〃 |
〃 |
|
French |
〃 |
〃 |
〃 |
|
French-Canada |
〃 |
〃 |
〃 |
|
Italian |
〃 |
〃 |
〃 |
|
Dutch |
〃 |
〃 |
〃 |
|
Portuguese-Brazil |
〃 |
〃 |
〃 |
|
Swedish |
〃 |
〃 |
〃 |
|
English-Great Britain |
〃 |
〃 |
〃 |
|
U.S. English |
|
45 |
1 |
|
Latin II |
〃 |
〃 |
〃 |
|
Czech dBase |
〃 |
〃 |
〃 |
|
Polish dBase |
〃 |
〃 |
〃 |
|
Slovene dBase |
〃 |
〃 |
〃 |
|
Czech Paradox |
〃 |
〃 |
〃 |
|
Hungarian Paradox |
〃 |
〃 |
〃 |
|
Polish Paradox |
〃 |
〃 |
〃 |
|
Slovene Paradox |
|
46 |
1 |
|
Turkish |
〃 |
〃 |
〃 |
|
Turkish dBase |
|
16 |
1 |
|
Latin I (with Euro symbol) |
|
13 |
1 |
|
Portuguese |
〃 |
〃 |
〃 |
|
Portuguese dBase |
|
47 |
1 |
|
Icelandic |
〃 |
〃 |
〃 |
|
Icelandic Paradox |
|
17 |
1 |
|
Hebrew |
|
14 |
1 |
|
French-Canada |
〃 |
〃 |
〃 |
|
French dBase-Canada |
|
18 |
1 |
|
Arabic |
|
12 |
1 |
|
Scandinavian |
〃 |
〃 |
〃 |
|
Danish dBase |
〃 |
〃 |
〃 |
|
Norwegian dBase |
〃 |
〃 |
〃 |
|
Paradox Norway and Denmark |
|
48 |
1 |
|
Russian |
|
49 |
1 |
|
Modern Greek |
|
6 |
2 |
|
Japanese EUC |
|
69 |
4 |
|
Chinese |
〃 |
〃 |
〃 |
|
Chinese |
|
67 |
2 |
|
Chinese |
〃 |
〃 |
〃 |
|
Chinese |
|
57 |
2 |
|
Simplified Chinese (Hong Kong, Korea) |
|
21 |
1 |
|
Latin I |
〃 |
〃 |
〃 |
|
Danish |
〃 |
〃 |
〃 |
|
German |
〃 |
〃 |
〃 |
|
Dutch |
〃 |
〃 |
〃 |
|
English-Great Britain |
〃 |
〃 |
〃 |
|
U.S. English |
〃 |
〃 |
〃 |
|
Spanish |
〃 |
〃 |
〃 |
|
Spanish — case insensitive + accent insensitive |
〃 |
〃 |
〃 |
|
Finnish |
〃 |
〃 |
〃 |
|
French-Canada |
〃 |
〃 |
〃 |
|
French-Canada — case insensitive + accent insensitive |
〃 |
〃 |
〃 |
|
French |
〃 |
〃 |
〃 |
|
French — case insensitive + accent insensitive |
〃 |
〃 |
〃 |
|
Icelandic |
〃 |
〃 |
〃 |
|
Italian |
〃 |
〃 |
〃 |
|
Norwegian |
〃 |
〃 |
〃 |
|
Portuguese-Brazil |
〃 |
〃 |
〃 |
|
Portuguese |
〃 |
〃 |
〃 |
|
Swedish |
|
22 |
1 |
|
Latin 2 — Central Europe (Croatian, Czech, Hungarian, Polish, Romanian, Serbian, Slovak, Slovenian) |
〃 |
〃 |
〃 |
|
Czech |
〃 |
〃 |
〃 |
|
Hungarian — case insensitive, accent sensitive |
〃 |
〃 |
〃 |
|
Polish |
|
23 |
1 |
|
Latin 3 — Southern Europe (Malta, Esperanto) |
|
34 |
1 |
|
Latin 4 — Northern Europe (Estonian, Latvian, Lithuanian, Greenlandic, Lappish) |
|
35 |
1 |
|
Cyrillic (Russian) |
|
36 |
1 |
|
Arabic |
|
37 |
1 |
|
Greek |
|
38 |
1 |
|
Hebrew |
|
39 |
1 |
|
Latin 5 |
|
40 |
1 |
|
Latin 7 — Baltic |
〃 |
〃 |
〃 |
|
Lithuanian |
|
63 |
1 |
|
Russian — dictionary ordering |
〃 |
〃 |
〃 |
|
Russian |
|
64 |
1 |
|
Ukrainian — dictionary ordering |
〃 |
〃 |
〃 |
|
Ukrainian |
|
44 |
2 |
|
Korean |
〃 |
〃 |
〃 |
|
Korean — dictionary sort order |
|
19 |
1 |
|
Coding NeXTSTEP |
〃 |
〃 |
〃 |
|
German |
〃 |
〃 |
〃 |
|
Spanish |
〃 |
〃 |
〃 |
|
French |
〃 |
〃 |
〃 |
|
Italian |
〃 |
19 |
1 |
|
U.S. English |
|
0 |
1 |
|
Neutral code page. Translation to upper case is performed only for ASCII codes 97-122. Recommendation: avoid this character set |
|
1 |
1 |
|
Binary character encoding |
|
5 |
2 |
|
Japanese |
|
66 |
1 |
|
Thai |
〃 |
〃 |
〃 |
|
Thai |
|
3 |
3 |
|
All English |
|
4 |
4 |
|
Any language that is supported in Unicode 4.0 |
〃 |
〃 |
〃 |
|
Any language that is supported in Unicode 4.0 |
〃 |
〃 |
〃 |
|
Any language that is supported in Unicode 4.0 |
〃 |
〃 |
〃 |
|
Any language that is supported in Unicode 4.0 — Case insensitive |
〃 |
〃 |
〃 |
|
Any language that is supported in Unicode 4.0 — Case insensitive and accent insensitive |
|
51 |
1 |
|
ANSI — Central Europe |
〃 |
〃 |
〃 |
|
Bosnian |
〃 |
〃 |
〃 |
|
Czech |
〃 |
〃 |
〃 |
|
Hungarian — case insensitive, accent sensitive |
〃 |
〃 |
〃 |
|
Hungarian — dictionary ordering |
〃 |
〃 |
〃 |
|
Polish |
〃 |
〃 |
〃 |
|
Slovenian |
〃 |
〃 |
〃 |
|
Czech |
〃 |
〃 |
〃 |
|
Czech — Case insensitive and accent insensitive |
|
52 |
1 |
|
ANSI Cyrillic |
〃 |
〃 |
〃 |
|
Paradox Cyrillic (Russian) |
〃 |
〃 |
〃 |
|
Ukrainian |
|
53 |
1 |
|
ANSI — Latin I |
〃 |
〃 |
〃 |
|
English International |
〃 |
〃 |
〃 |
|
Paradox multilingual Latin I |
〃 |
〃 |
〃 |
|
Norwegian and Danish |
〃 |
〃 |
〃 |
|
Paradox Spanish |
〃 |
〃 |
〃 |
|
Swedish and Finnish |
〃 |
〃 |
〃 |
|
Portuguese — Brazil |
|
54 |
1 |
|
ANSI Greek |
〃 |
〃 |
〃 |
|
Paradox Greek |
|
55 |
1 |
|
ANSI Turkish |
〃 |
〃 |
〃 |
|
Paradox Turkish |
|
58 |
1 |
|
ANSI Hebrew |
|
59 |
1 |
|
ANSI Arabic |
|
60 |
1 |
|
ANSI Baltic |
〃 |
〃 |
〃 |
|
Estonian — Dictionary ordering |
〃 |
〃 |
〃 |
|
Lithuanian — Dictionary ordering |
〃 |
〃 |
〃 |
|
Latvian — Dictionary ordering |
|
65 |
1 |
|
|
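The character sets and collations in this table are applied when declaring columns or in COLLATE clauses, for example:

```sql
CREATE TABLE PERSON (
  NAME VARCHAR(60) CHARACTER SET UTF8 COLLATE UNICODE_CI
);

-- Case-insensitive ordering using the collation
SELECT NAME FROM PERSON
ORDER BY NAME COLLATE UNICODE_CI;
```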
Appendix I: License notice
The contents of this Documentation are subject to the Public Documentation License Version 1.0 (the “License”); you may only use this Documentation if you comply with the terms of this License. Copies of the License are available at https://www.firebirdsql.org/pdfmanual/pdl.pdf (PDF) and https://www.firebirdsql.org/manual/pdl.html (HTML).
The Original Documentation is titled Firebird 5.0 Language Reference. This Documentation was derived from Firebird 4.0 Language Reference.
The Initial Writers of the Original Documentation are: Paul Vinkenoog, Dmitry Yemanov, Thomas Woinke and Mark Rotteveel. Writers of text originally in Russian are Denis Simonov, Dmitry Filippov, Alexander Karpeykin, Alexey Kovyazin and Dmitry Kuzmenko.
Copyright © 2008-2024. All Rights Reserved. Initial Writers contact: paul at vinkenoog dot nl.
Writers and Editors of included PDL-licensed material are: J. Beesley, Helen Borrie, Arno Brinkman, Frank Ingermann, Vlad Khorsun, Alex Peshkov, Nickolay Samofatov, Adriano dos Santos Fernandes, Dmitry Yemanov.
Included portions are Copyright © 2001-2024 by their respective authors. All Rights Reserved.
Contributor(s): Mark Rotteveel.
Portions created by Mark Rotteveel are Copyright © 2018-2024. All Rights Reserved. (Contributor contact(s): mrotteveel at users dot sourceforge dot net).
Appendix J: Document History
The exact file history is recorded in our git repository; see https://github.com/FirebirdSQL/firebird-documentation
Revision History | |||
---|---|---|---|
1.0 |
02 Oct 2024 |
MR |
|
0.9 |
20 Apr 2024 |
MR |
Corrected description of |
0.8 |
2 Apr 2024 |
MR |
Protocol names are lowercase (#205) |
0.7 |
17 Jan 2024 |
MR |
|
0.6 |
15 Dec 2023 |
MR |
|
0.5 |
29 Sep 2023 |
MR |
|
0.4 |
20 Jun 2023 |
MR |
|
0.3 |
26 May 2023 |
MR |
|
0.2 |
10 May 2023 |
MR |
|
0.1 |
05 May 2023 |
MR |
Copied the Firebird 4.0 Language Reference as a starting point:
|