re2c - compile regular expressions to code
re2c [OPTIONS] INPUT [-o OUTPUT]
re2go [OPTIONS] INPUT [-o OUTPUT]
re2c is a tool for generating fast lexical analyzers for C, C++ and Go.
A re2c program consists of normal code intermixed with re2c blocks and directives. Each re2c block may contain definitions, configurations and rules.

Definitions are of the form name = regexp; where name is an identifier that consists of letters, digits and underscores, and regexp is a regular expression. Regular expressions may contain other definitions, but recursion is not allowed and each name should be defined before it is used. Configurations are of the form re2c:config = value; where config is the configuration descriptor and value can be a number, a string or a special word.

Rules consist of a regular expression followed by a semantic action: either a block of code enclosed in curly braces { and }, or a raw one-line piece of code preceded with := and terminated by a newline that is not followed by whitespace. If the input matches the regular expression, the associated semantic action is executed. If multiple rules match, the longest match takes precedence. If multiple rules match the same string, the earlier rule takes precedence. There are two special rules: the default rule * and the EOF rule $. The default rule should always be defined; it has the lowest priority regardless of its place and matches any code unit (not necessarily a valid character, see encoding support). The EOF rule matches the end of input; it should be defined if the corresponding EOF handling method is used. If start conditions are used, rules have more complex syntax.

All rules of a single block are compiled into a deterministic finite-state automaton (DFA) and encoded in the form of a program in the target language. The generated code interfaces with the outer program by means of a few user-defined primitives (see the program interface section). Reusable blocks allow sharing rules, definitions and configurations between different blocks.
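For instance, a rule with a raw one-line action written after := could look like the following schematic fragment (the token names are made up for illustration; the full example below uses block actions instead):

    /*!re2c
        re2c:define:YYCTYPE = char;
        num = [0-9]+;               // named definition

        num := return TOK_NUM;      // rule with a raw one-line action
        "+"  { return TOK_PLUS; }   // rule with a block action
        *    { return TOK_ERROR; }  // default rule
    */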
// re2c $INPUT -o $OUTPUT -i
#include <assert.h>

// C/C++ code
int lex(const char *YYCURSOR) {
    /*!re2c // start of re2c block
    re2c:define:YYCTYPE = char;     // configuration
    re2c:yyfill:enable = 0;         // configuration
    re2c:flags:case-ranges = 1;     // configuration

    ident = [a-zA-Z_][a-zA-Z_0-9]*; // named definition

    ident { return 0; }             // normal rule
    *     { return 1; }             // default rule
    */
}

// C/C++ code
int main() {
    assert(lex("_Zer0") == 0);
    return 0;
}
/* Generated by re2c */
// re2c $INPUT -o $OUTPUT -i
#include <assert.h>

// C/C++ code
int lex(const char *YYCURSOR) {
    {
        char yych;
        yych = *YYCURSOR;
        switch (yych) {
        case 'A' ... 'Z':
        case '_':
        case 'a' ... 'z': goto yy4;
        default: goto yy2;
        }
yy2:
        ++YYCURSOR;
        { return 1; }
yy4:
        yych = *++YYCURSOR;
        switch (yych) {
        case '0' ... '9':
        case 'A' ... 'Z':
        case '_':
        case 'a' ... 'z': goto yy4;
        default: goto yy6;
        }
yy6:
        { return 0; }
    }
}

// C/C++ code
int main() {
    assert(lex("_Zer0") == 0);
    return 0;
}
Re2c has a flexible interface that gives the user both the freedom and the responsibility to define how the generated code interacts with the outer program. There are two major options: the default API, in which the generated code works directly with pointer-like primitives such as YYCURSOR, YYLIMIT and YYMARKER, and the generic API (enabled with the re2c:flags:input = custom; configuration, as in one of the examples below), in which every input operation is a user-defined primitive.
Generic API has two styles: function-like, where the primitives are defined as functions or macros, and free-form (enabled with re2c:api:style = free-form;), where the primitives are defined as free-form pieces of code with argument placeholders. The two definitions below illustrate the same set of primitives in both styles.

Function-like style:

#define YYPEEK()                *YYCURSOR
#define YYSKIP()                ++YYCURSOR
#define YYBACKUP()              YYMARKER = YYCURSOR
#define YYBACKUPCTX()           YYCTXMARKER = YYCURSOR
#define YYRESTORE()             YYCURSOR = YYMARKER
#define YYRESTORECTX()          YYCURSOR = YYCTXMARKER
#define YYRESTORETAG(tag)       YYCURSOR = tag
#define YYLESSTHAN(len)         YYLIMIT - YYCURSOR < len
#define YYSTAGP(tag)            tag = YYCURSOR
#define YYSTAGN(tag)            tag = NULL
#define YYSHIFT(shift)          YYCURSOR += shift
#define YYSHIFTSTAG(tag, shift) tag += shift

Free-form style:

re2c:define:YYPEEK       = "*YYCURSOR";
re2c:define:YYSKIP       = "++YYCURSOR";
re2c:define:YYBACKUP     = "YYMARKER = YYCURSOR";
re2c:define:YYBACKUPCTX  = "YYCTXMARKER = YYCURSOR";
re2c:define:YYRESTORE    = "YYCURSOR = YYMARKER";
re2c:define:YYRESTORECTX = "YYCURSOR = YYCTXMARKER";
re2c:define:YYRESTORETAG = "YYCURSOR = ${tag}";
re2c:define:YYLESSTHAN   = "YYLIMIT - YYCURSOR < @@{len}";
re2c:define:YYSTAGP      = "@@{tag} = YYCURSOR";
re2c:define:YYSTAGN      = "@@{tag} = NULL";
re2c:define:YYSHIFT      = "YYCURSOR += @@{shift}";
re2c:define:YYSHIFTSTAG  = "@@{tag} += @@{shift}";
Here is a list of API primitives that may be used by the generated code in order to interface with the outer program. Which primitives are needed depends on multiple factors, including the complexity of regular expressions, input representation, buffering, the use of various features and so on. All the necessary primitives should be defined by the user in the form of macros, functions, variables, free-form pieces of code or any other suitable form. Re2c does not (and cannot) check the definitions, so if anything is missing or defined incorrectly the generated code will not compile.
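As a rough sketch (a hypothetical fragment, not one of this manual's examples), with the default pointer-based C API and buffer refilling disabled, the user-supplied part may be as small as a couple of pointer variables; which primitives are actually required depends on the grammar and configuration:

    // Hypothetical lexer: the user supplies YYCURSOR and YYMARKER,
    // and sets YYCTYPE via a configuration inside the block.
    static int lex(const char *input) {
        const char *YYCURSOR = input; // current input position, advanced by the lexer
        const char *YYMARKER;         // backtracking position, set and restored by the lexer
        /*!re2c
        re2c:define:YYCTYPE = char;   // type of a single code unit
        re2c:yyfill:enable = 0;       // no buffer refilling in this sketch
        [a-z]+ [;] { return 0; }
        *          { return 1; }
        */
    }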
Re2c provides a number of directives (in no particular order): /*!re2c ... */ (an ordinary block), /*!rules:re2c ... */ and /*!use:re2c ... */ (reusable blocks), /*!include:re2c FILE */ (file inclusion), /*!max:re2c*/ (defines YYMAXFILL), /*!maxnmatch:re2c*/ (defines YYMAXNMATCH), /*!getstate:re2c*/ (state dispatch for storable-state lexers), /*!types:re2c*/ (condition identifiers), /*!stags:re2c ... */ and /*!mtags:re2c ... */ (tag variable declarations), and /*!header:re2c:on*/ with /*!header:re2c:off*/ (header file generation). More information on each directive can be found in the related sections.
re2c uses the following syntax for regular expressions:
Character classes and string literals may contain the following escape sequences: \a, \b, \f, \n, \r, \t, \v, \\, octal escapes \ooo and hexadecimal escapes \xhh, \uhhhh and \Uhhhhhhhh.
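For instance, escape sequences can be used in named definitions like the following illustrative fragment (the names ws and esc are made up):

    /*!re2c
        ws  = [ \t\r\n\f\v];    // character class built from escape sequences
        esc = "\x1b" | "\033";  // string literals with hexadecimal and octal escapes
    */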
Re2c provides a number of ways to handle the end-of-input situation. Which way to use depends on the complexity of regular expressions, performance considerations, the need for input buffering and various other factors. EOF handling is probably the most complex part of the re2c user interface --- it definitely requires a bit of understanding of how the generated lexer works. But in return it allows the user to customize the lexer for a particular environment and avoid the unnecessary overhead of generic methods when a simpler method is sufficient. Roughly speaking, there are four main methods: using a sentinel symbol, bounds checking with padding, the EOF rule (a combination of the two), and a custom method built on the generic API.
This is the simplest and the most efficient method. It is applicable when the input is small enough to fit into a continuous memory buffer and there is a natural "sentinel" symbol --- a code unit that is not allowed by any of the regular expressions in the grammar (except possibly as a terminating character). The sentinel symbol never appears in well-formed input, therefore it can be appended at the end of input and used as a stop signal by the lexer. A good example of such input is a null-terminated C string, provided that the grammar does not allow NUL in the middle of lexemes. The sentinel method is very efficient, because the lexer does not need to perform any additional checks for the end of input --- it comes naturally as a part of processing the next character. It is very important that the sentinel symbol is not allowed in the middle of a rule --- otherwise on some inputs the lexer may read past the end of the buffer and crash or cause memory corruption. Re2c verifies this automatically. Use the re2c:sentinel configuration to specify which sentinel symbol is used.
Below is an example of using sentinel method. Configuration re2c:yyfill:enable = 0; suppresses generation of end-of-input checks and YYFILL calls.
// re2c $INPUT -o $OUTPUT
#include <assert.h>

// expect a null-terminated string
static int lex(const char *YYCURSOR) {
int count = 0;
loop:
/*!re2c
re2c:define:YYCTYPE = char;
re2c:yyfill:enable = 0;
* { return -1; }
[\x00] { return count; }
[a-z]+ { ++count; goto loop; }
[ ]+ { goto loop; }
*/
}

int main() {
assert(lex("") == 0);
assert(lex("one two three") == 3);
assert(lex("f0ur") == -1);
return 0;
}
Bounds checking is a generic method: it can be used with any input grammar. The basic idea is simple: we need to check for the end of input before reading the next input character. However, if implemented in a straightforward way, this would be quite inefficient: checking on each input character would cause a major slowdown. Re2c avoids the slowdown by generating checks only in certain key states of the lexer, and letting it run without checks in between the key states. More precisely, re2c computes strongly connected components (SCCs) of the underlying DFA (which roughly correspond to loops), and generates only a few checks per SCC (usually just one, but in general enough to make the SCC acyclic). The check is of the form (YYLIMIT - YYCURSOR) < n, where n is the maximal length of a simple path in the corresponding SCC. If this condition is true, the lexer calls YYFILL(n), which must either supply at least n input characters or not return at all. When the lexer continues after the check, it is certain that the next n characters can be read safely without checks.
This approach reduces the number of checks significantly (and makes the lexer much faster as a result), but it has a downside. Since the lexer checks for multiple characters at once, it may end up in a situation when there are a few remaining input characters (less than n) corresponding to a short path in the SCC, but the lexer cannot proceed because of the check, and YYFILL cannot supply more characters because it is the end of input. To solve this problem, re2c requires that additional padding consisting of fake characters is appended at the end of input. The length of the padding should be YYMAXFILL, which equals the maximal n parameter to YYFILL; its definition is generated by re2c with the /*!max:re2c*/ directive. The fake characters should not form a valid lexeme suffix, otherwise the lexer may be fooled into matching a fake lexeme. Usually it is a good idea to use NUL characters for padding.
Below is an example of using bounds checking with padding. Note that the grammar rule for single-quoted strings allows arbitrary symbols in the middle of a lexeme, so there is no natural sentinel in the grammar. Strings like "aha\0ha" are perfectly valid, but ill-formed strings like "aha\0 are also possible and should not crash the lexer. In this example we do not use buffer refilling, therefore the YYFILL definition simply returns an error. Note that YYFILL will only be called after the lexer reaches the padding, because only then will the check condition be satisfied.
// re2c $INPUT -o $OUTPUT
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*!max:re2c*/

// expect YYMAXFILL-padded string
static int lex(const char *str, unsigned int len) {
const char *YYCURSOR = str, *YYLIMIT = str + len + YYMAXFILL;
int count = 0;
loop:
/*!re2c
re2c:api:style = free-form;
re2c:define:YYCTYPE = char;
re2c:define:YYFILL = "return -1;";
* { return -1; }
[\x00] { return YYCURSOR == YYLIMIT ? count : -1; }
['] ([^'\\] | [\\][^])* ['] { ++count; goto loop; }
[ ]+ { goto loop; }
*/
}

// make a copy of the string with YYMAXFILL zeroes at the end
static void test(const char *str, unsigned int len, int res) {
char *s = (char*) malloc(len + YYMAXFILL);
memcpy(s, str, len);
memset(s + len, 0, YYMAXFILL);
int r = lex(s, len);
free(s);
assert(r == res);
}

#define TEST(s, r) test(s, sizeof(s) - 1, r)

int main() {
TEST("", 0);
TEST("'qu\0tes' 'are' 'fine: \\'' ", 3);
TEST("'unterminated\\'", -1);
return 0;
}
The EOF rule $ was introduced in version 1.2. It is a hybrid approach that tries to take the best of both worlds: the simplicity and efficiency of the sentinel method combined with the generality of the bounds-checking method. The idea is to appoint an arbitrary symbol to be the sentinel, and only perform further bounds checking if the sentinel symbol matches (more precisely, if the symbol class that contains it matches). The check is of the form YYLIMIT <= YYCURSOR. If this condition is not satisfied, then the sentinel is just an ordinary input character and the lexer continues. Otherwise this is a real sentinel, and the lexer calls YYFILL(). If YYFILL returns zero, the lexer assumes that it has more input and tries to re-match. Otherwise YYFILL returns non-zero and the lexer knows that it has reached the end of input. At this point there are three possibilities. First, it might have already matched a shorter lexeme --- in this case it just rolls back to the last accepting state. Second, it might have consumed some characters, but failed to match --- in this case it falls back to the default rule *. Finally, it might be in the initial state --- in this (and only this!) case it matches the EOF rule $.
Below is an example of using EOF rule. Configuration re2c:yyfill:enable = 0; suppresses generation of YYFILL calls (but not the bounds checks).
// re2c $INPUT -o $OUTPUT
#include <assert.h>

// expect a null-terminated string
static int lex(const char *str, unsigned int len) {
const char *YYCURSOR = str, *YYLIMIT = str + len, *YYMARKER;
int count = 0;
loop:
/*!re2c
re2c:define:YYCTYPE = char;
re2c:yyfill:enable = 0;
re2c:eof = 0;
* { return -1; }
$ { return count; }
['] ([^'\\] | [\\][^])* ['] { ++count; goto loop; }
[ ]+ { goto loop; }
*/
}

#define TEST(s, r) assert(lex(s, sizeof(s) - 1) == r)

int main() {
TEST("", 0);
TEST("'qu\0tes' 'are' 'fine: \\'' ", 3);
TEST("'unterminated\\'", -1);
return 0;
}
Generic API can be used with any of the above methods. It also allows one to use a user-defined method by placing EOF checks in one of the basic primitives. Usually this is either YYSKIP (the check is performed when advancing to the next input character), or YYPEEK (the check is performed when reading the next input character). The resulting methods are inefficient, as they check on each input character. However, they can be useful in cases when the input cannot be buffered or padded and does not contain a sentinel character at the end. One should be cautious when using such ad-hoc methods, as it is easy to overlook some corner cases and come up with a method that only partially works. Also it should be noted that not everything can be expressed via generic API: for example, it is impossible to reimplement the way EOF rule works (in particular, it is impossible to re-match the character after successful YYFILL).
Below is an example of using YYSKIP to perform bounds checking without padding. YYFILL generation is suppressed with the re2c:yyfill:enable = 0; configuration. Note that if the grammar were more complex, this method might not work in the case when two rules overlap and the EOF check fails after a shorter lexeme has already been matched (this cannot happen in our example, as there are no overlapping rules).
// re2c $INPUT -o $OUTPUT
#include <assert.h>
#include <stdlib.h>
#include <string.h>

// expect a string without terminating null
static int lex(const char *str, unsigned int len) {
const char *cur = str, *lim = str + len, *mar;
int count = 0;
loop:
/*!re2c
re2c:yyfill:enable = 0;
re2c:eof = 0;
re2c:flags:input = custom;
re2c:api:style = free-form;
re2c:define:YYCTYPE = char;
re2c:define:YYLESSTHAN = "cur >= lim";
re2c:define:YYPEEK = "cur < lim ? *cur : 0"; // fake null
re2c:define:YYSKIP = "++cur;";
re2c:define:YYBACKUP = "mar = cur;";
re2c:define:YYRESTORE = "cur = mar;";
* { return -1; }
$ { return count; }
['] ([^'\\] | [\\][^])* ['] { ++count; goto loop; }
[ ]+ { goto loop; }
*/
}

// make a copy of the string without terminating null
static void test(const char *str, unsigned int len, int res) {
char *s = (char*) malloc(len);
memcpy(s, str, len);
int r = lex(s, len);
free(s);
assert(r == res);
}

#define TEST(s, r) test(s, sizeof(s) - 1, r)

int main() {
TEST("", 0);
TEST("'qu\0tes' 'are' 'fine: \\'' ", 3);
TEST("'unterminated\\'", -1);
return 0;
}
The need for buffering arises when the input cannot be mapped in memory all at once: either it is too large, or it comes in a streaming fashion (like reading from a socket). The usual technique in such cases is to allocate a fixed-sized memory buffer and process input in chunks that fit into the buffer. When the current chunk is processed, it is moved out and new data is moved in. In practice it is somewhat more complex, because the lexer state consists not of a single input position, but of a set of interrelated positions: the cursor (the next input character to be read), the limit (the end of available input), the marker (the position of the last matched rule), the context marker (the position of the trailing context, if any), the token (the start of the current lexeme) and tag variables (submatch positions, if tags are used).
Not all of these are used in every case, but if used, they must be updated by YYFILL. All active positions are contained in the segment between token and cursor, therefore everything between the buffer start and token can be discarded, the segment from token up to limit should be moved to the beginning of the buffer, and the free space at the end of the buffer should be filled with new data. In order to avoid frequent YYFILL calls it is best to fill in as many input characters as possible (even though fewer characters might suffice to resume the lexer). The details of the YYFILL implementation are slightly different depending on which EOF handling method is used: the case of the EOF rule is somewhat simpler than the case of bounds-checking with padding. Also note that if the -f --storable-state option is used, YYFILL has slightly different semantics (described in the section about storable state).
If EOF rule is used, YYFILL is a function-like primitive that accepts no arguments and returns a value which is checked against zero. YYFILL invocation is triggered by condition YYLIMIT <= YYCURSOR in default API and YYLESSTHAN() in generic API. A non-zero return value means that YYFILL has failed. A successful YYFILL call must supply at least one character and adjust input positions accordingly. Limit must always be set to one after the last input position in buffer, and the character at the limit position must be the sentinel symbol specified by re2c:eof configuration. The pictures below show the relative locations of input positions in buffer before and after YYFILL call (sentinel symbol is marked with #, and the second picture shows the case when there is not enough input to fill the whole buffer).
  <-- shift -->
>-A------------B---------C-------------D#-----------E->
  buffer       token     marker        limit,
                                       cursor

>-A------------B---------C-------------D------------E#->
  buffer,      marker    cursor                     limit
  token

  <-- shift -->
>-A------------B---------C-------------D#--E (EOF)
  buffer       token     marker        limit,
                                       cursor

>-A------------B---------C-------------D---E#........
  buffer,      marker    cursor             limit
  token
Here is an example of a program that reads an input file in chunks of 4096 bytes and uses the EOF rule.
// re2c $INPUT -o $OUTPUT
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define SIZE 4096

typedef struct {
FILE *file;
char buf[SIZE + 1], *lim, *cur, *mar, *tok;
int eof;
} Input;

static int fill(Input *in) {
if (in->eof) {
return 1;
}
const size_t free = in->tok - in->buf;
if (free < 1) {
return 2;
}
memmove(in->buf, in->tok, in->lim - in->tok);
in->lim -= free;
in->cur -= free;
in->mar -= free;
in->tok -= free;
in->lim += fread(in->lim, 1, free, in->file);
in->lim[0] = 0;
in->eof |= in->lim < in->buf + SIZE;
return 0;
}

static void init(Input *in, FILE *file) {
in->file = file;
in->cur = in->mar = in->tok = in->lim = in->buf + SIZE;
in->eof = 0;
fill(in);
}

static int lex(Input *in) {
int count = 0;
loop:
in->tok = in->cur;
/*!re2c
re2c:eof = 0;
re2c:api:style = free-form;
re2c:define:YYCTYPE = char;
re2c:define:YYCURSOR = in->cur;
re2c:define:YYMARKER = in->mar;
re2c:define:YYLIMIT = in->lim;
re2c:define:YYFILL = "fill(in) == 0";
* { return -1; }
$ { return count; }
['] ([^'\\] | [\\][^])* ['] { ++count; goto loop; }
[ ]+ { goto loop; }
*/
}

int main() {
const char *fname = "input";
const char str[] = "'qu\0tes' 'are' 'fine: \\'' ";
FILE *f;
Input in;
// prepare input file: a few times the size of the buffer,
// containing strings with zeroes and escaped quotes
f = fopen(fname, "w");
for (int i = 0; i < SIZE; ++i) {
fwrite(str, 1, sizeof(str) - 1, f);
}
fclose(f);
f = fopen(fname, "r");
init(&in, f);
assert(lex(&in) == SIZE * 3);
fclose(f);
remove(fname);
return 0;
}
In the default case (when the EOF rule is not used) YYFILL is a function-like primitive that accepts a single argument and does not return any value. YYFILL invocation is triggered by the condition (YYLIMIT - YYCURSOR) < n in default API and YYLESSTHAN(n) in generic API. The argument passed to YYFILL is the minimal number of characters that must be supplied. If it fails to do so, YYFILL must not return to the lexer (for that reason it is best implemented as a macro that returns from the calling function on failure). In case of a successful YYFILL invocation the limit position must be set either to one after the last input position in the buffer, or to the end of YYMAXFILL padding (in case YYFILL has successfully read at least n characters, but not enough to fill the entire buffer). The pictures below show the relative locations of input positions in the buffer before and after the YYFILL invocation (YYMAXFILL padding in the second picture is marked with # symbols).
  <-- shift -->                <-- need -->
>-A------------B---------C-----D-------E---F--------G->
  buffer       token     marker        limit
                               cursor

>-A------------B---------C-----D-------E---F--------G->
  buffer,      marker    cursor                     limit
  token

  <-- shift -->                <-- need -->
>-A------------B---------C-----D-------E-F (EOF)
  buffer       token     marker        limit
                               cursor

>-A------------B---------C-----D-------E-F###############
  buffer,      marker    cursor                          limit
  token                                 <- YYMAXFILL ->
Here is an example of a program that reads an input file in chunks of 4096 bytes and uses bounds-checking with padding.
// re2c $INPUT -o $OUTPUT
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*!max:re2c*/
#define SIZE 4096

typedef struct {
FILE *file;
char buf[SIZE + YYMAXFILL], *lim, *cur, *mar, *tok;
int eof;
} Input;

static int fill(Input *in, size_t need) {
if (in->eof) {
return 1;
}
const size_t free = in->tok - in->buf;
if (free < need) {
return 2;
}
memmove(in->buf, in->tok, in->lim - in->tok);
in->lim -= free;
in->cur -= free;
in->mar -= free;
in->tok -= free;
in->lim += fread(in->lim, 1, free, in->file);
if (in->lim < in->buf + SIZE) {
in->eof = 1;
memset(in->lim, 0, YYMAXFILL);
in->lim += YYMAXFILL;
}
return 0;
}

static void init(Input *in, FILE *file) {
in->file = file;
in->cur = in->mar = in->tok = in->lim = in->buf + SIZE;
in->eof = 0;
fill(in, 1);
}

static int lex(Input *in) {
int count = 0;
loop:
in->tok = in->cur;
/*!re2c
re2c:api:style = free-form;
re2c:define:YYCTYPE = char;
re2c:define:YYCURSOR = in->cur;
re2c:define:YYMARKER = in->mar;
re2c:define:YYLIMIT = in->lim;
re2c:define:YYFILL = "if (fill(in, @@) != 0) return -1;";
* { return -1; }
[\x00] { return (YYMAXFILL == in->lim - in->tok) ? count : -1; }
['] ([^'\\] | [\\][^])* ['] { ++count; goto loop; }
[ ]+ { goto loop; }
*/
}

int main() {
const char *fname = "input";
const char str[] = "'qu\0tes' 'are' 'fine: \\'' ";
FILE *f;
Input in;
// prepare input file: a few times the size of the buffer,
// containing strings with zeroes and escaped quotes
f = fopen(fname, "w");
for (int i = 0; i < SIZE; ++i) {
fwrite(str, 1, sizeof(str) - 1, f);
}
fclose(f);
f = fopen(fname, "r");
init(&in, f);
assert(lex(&in) == SIZE * 3);
fclose(f);
remove(fname);
return 0;
}
Re2c allows one to include other files using the directive /*!include:re2c FILE */, where FILE is the name of the file to be included. Re2c looks for included files in the directory of the including file and in include locations, which can be specified with the -I option. The re2c include directive works in the same way as the C/C++ #include: the contents of FILE are copy-pasted verbatim in place of the directive. Included files may have further includes of their own. Re2c provides some predefined include files that can be found in the include/ subdirectory of the project. These files contain definitions that can be useful to other projects (such as Unicode categories) and form something like a standard library for re2c. Here is an example:
// definitions.h (the file included by the program below)
typedef enum { OK, FAIL } Result;

/*!re2c
number = [1-9][0-9]*;
*/

// re2c $INPUT -o $OUTPUT -i
#include <assert.h>

/*!include:re2c "definitions.h" */

Result lex(const char *YYCURSOR) {
/*!re2c
re2c:define:YYCTYPE = char;
re2c:yyfill:enable = 0;
number { return OK; }
* { return FAIL; }
*/
}

int main() {
assert(lex("123") == OK);
return 0;
}
Re2c allows one to generate a header file from the input .re file using the option -t, --type-header (or the configuration re2c:flags:type-header) and the directives /*!header:re2c:on*/ and /*!header:re2c:off*/. The first directive marks the beginning of the header file, and the second directive marks the end of it. Everything between these directives is processed by re2c, and the generated code is written to the file specified by the -t --type-header option (or to stdout if this option was not used). An autogenerated header file may be needed in cases when re2c is used to generate definitions of constants, variables and structs that must be visible from other translation units.
Here is an example of generating a header file that contains a definition of the lexer state with tag variables (the number of variables depends on the regular grammar and is unknown to the programmer).
// re2c $INPUT -o $OUTPUT -i --type-header src/lexer/lexer.h
#include <assert.h>
#include "src/lexer/lexer.h" // generated by re2c

/*!header:re2c:on*/
typedef struct {
const char *str, *cur, *mar;
/*!stags:re2c format = "const char *@@{tag}; "; */
} LexerState;
/*!header:re2c:off*/

int lex(LexerState *st) {
/*!re2c
re2c:flags:type-header = "src/lexer/lexer.h";
re2c:yyfill:enable = 0;
re2c:flags:tags = 1;
re2c:define:YYCTYPE = char;
re2c:define:YYCURSOR = "st->cur";
re2c:define:YYMARKER = "st->mar";
re2c:tags:expression = "st->@@{tag}";
[x]{1,4} / [x]{3,5} { return 0; } // ambiguous trailing context
* { return 1; }
*/
}

int main() {
LexerState st;
st.str = st.cur = "xxxxxxxx";
assert(lex(&st) == 0 && st.cur - st.str == 4);
return 0;
}
/* Generated by re2c */
typedef struct {
const char *str, *cur, *mar;
const char *yyt1; const char *yyt2; const char *yyt3; } LexerState;
Re2c has two options for submatch extraction.
The first option is -T --tags. With this option one can use standalone tags of the form @stag and #mtag, where stag and mtag are arbitrary user-defined names. Tags can be used anywhere inside a regular expression; semantically they are just position markers. Tags of the form @stag are called s-tags: they denote a single submatch value (the last input position where this tag matched). Tags of the form #mtag are called m-tags: they denote multiple submatch values (the whole history of repetitions of this tag). All tags should be defined by the user as variables with the corresponding names. With standalone tags re2c uses leftmost greedy disambiguation: submatch positions correspond to the leftmost matching path through the regular expression.
The second option is -P --posix-captures: it enables POSIX-compliant capturing groups. In this mode parentheses in regular expressions denote the beginning and the end of capturing groups; the whole regular expression is group number zero. The number of groups for the matching rule is stored in a variable yynmatch, and submatch results are stored in the yypmatch array. Both yynmatch and yypmatch should be defined by the user, and the size of yypmatch must be at least yynmatch * 2. Re2c provides a directive /*!maxnmatch:re2c*/ that defines YYMAXNMATCH: a constant equal to the maximal value of yynmatch among all rules. Note that re2c implements POSIX-compliant disambiguation: each subexpression matches as long as possible, and subexpressions that start earlier in the regular expression have priority over those starting later. Capturing groups are translated into s-tags under the hood, therefore we use the word "tag" to describe them as well.
With both -P --posix-captures and -T --tags options re2c uses the efficient submatch extraction algorithm described in the Tagged Deterministic Finite Automata with Lookahead paper. The overhead on submatch extraction in the generated lexer grows with the number of tags --- if this number is moderate, the overhead is barely noticeable. In the lexer, tags are implemented using a number of tag variables generated by re2c. There is no one-to-one correspondence between tag variables and tags: a single variable may be reused for different tags, and one tag may require multiple variables to hold all its ambiguous values. Eventually ambiguity is resolved, and only one final variable per tag survives. When a rule matches, all its tags are set to the values of the corresponding tag variables. The exact number of tag variables is unknown to the user; this number is determined by re2c. However, tag variables should be defined by the user as a part of the lexer state and updated by YYFILL, therefore re2c provides the directives /*!stags:re2c*/ and /*!mtags:re2c*/ that can be used to declare, initialize and manipulate tag variables. These directives have two optional configurations: format = "@@"; (specifies the template where @@ is substituted with the name of each tag variable), and separator = ""; (specifies the piece of code used to join the generated pieces for different tag variables).
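For instance, the directive can be used to declare one member per tag variable inside a user-defined state structure (a schematic fragment; the struct name and the other members are illustrative):

    // Hypothetical lexer state: one member is emitted for every tag
    // variable that re2c generates for this block.
    typedef struct {
        const char *cur, *mar;
        /*!stags:re2c format = "const char *@@;"; separator = " "; */
    } LexState;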
S-tags support the following operations: saving the current input position to a tag (an assignment of the form t = YYCURSOR in the default API, or the YYSTAGP(t) primitive in the generic API) and saving a "null" value to a tag (YYSTAGN(t)); see the generic API primitives above.
M-tags support analogous operations on the whole history of a tag: appending the current input position (the YYMTAGP(t) primitive) and appending a "null" value (YYMTAGN(t)), as shown in the m-tag example below.
S-tags can be implemented as scalar values (pointers or offsets). M-tags need a more complex representation, as they need to store a sequence of tag values. The most naive and inefficient representation of an m-tag is a list (array, vector) of tag values; a more efficient representation is to store all m-tags in a prefix tree represented as an array of nodes (v, p), where v is a tag value and p is a pointer to the parent node.
Here is an example of using s-tags to parse an IPv4 address.
// re2c $INPUT -o $OUTPUT
#include <assert.h>
#include <stdint.h>

static uint32_t num(const char *s, const char *e) {
uint32_t n = 0;
for (; s < e; ++s) n = n * 10 + (*s - '0');
return n;
}

static const uint64_t ERROR = ~0lu;

static uint64_t lex(const char *YYCURSOR) {
const char *YYMARKER, *o1, *o2, *o3, *o4;
/*!stags:re2c format = 'const char *@@;'; */
/*!re2c
re2c:yyfill:enable = 0;
re2c:flags:tags = 1;
re2c:define:YYCTYPE = char;
octet = [0-9] | [1-9][0-9] | [1][0-9][0-9] | [2][0-4][0-9] | [2][5][0-5];
dot = [.];
end = [\x00];
@o1 octet dot @o2 octet dot @o3 octet dot @o4 octet end {
return num(o4, YYCURSOR - 1)
+ (num(o3, o4 - 1) << 8)
+ (num(o2, o3 - 1) << 16)
+ (num(o1, o2 - 1) << 24);
}
* { return ERROR; }
*/
}

int main() {
assert(lex("1.2.3.4") == 0x01020304);
assert(lex("127.0.0.1") == 0x7f000001);
assert(lex("255.255.255.255") == 0xffffffff);
assert(lex("1.2.3.") == ERROR);
assert(lex("1.2.3.256") == ERROR);
return 0;
}
Here is an example of using POSIX capturing groups to parse an IPv4 address.
// re2c $INPUT -o $OUTPUT
#include <assert.h>
#include <stdint.h>

static uint32_t num(const char *s, const char *e) {
uint32_t n = 0;
for (; s < e; ++s) n = n * 10 + (*s - '0');
return n;
}

/*!maxnmatch:re2c*/

static const uint64_t ERROR = ~0lu;

static uint64_t lex(const char *YYCURSOR) {
const char *YYMARKER;
const char *yypmatch[YYMAXNMATCH * 2];
uint32_t yynmatch;
/*!stags:re2c format = 'const char *@@;'; */
/*!re2c
re2c:yyfill:enable = 0;
re2c:flags:posix-captures = 1;
re2c:define:YYCTYPE = char;
octet = [0-9] | [1-9][0-9] | [1][0-9][0-9] | [2][0-4][0-9] | [2][5][0-5];
dot = [.];
end = [\x00];
(octet) dot (octet) dot (octet) dot (octet) end {
assert(yynmatch == 5);
return num(yypmatch[8], yypmatch[9])
+ (num(yypmatch[6], yypmatch[7]) << 8)
+ (num(yypmatch[4], yypmatch[5]) << 16)
+ (num(yypmatch[2], yypmatch[3]) << 24);
}
* { return ERROR; }
*/
}

int main() {
assert(lex("1.2.3.4") == 0x01020304);
assert(lex("127.0.0.1") == 0x7f000001);
assert(lex("255.255.255.255") == 0xffffffff);
assert(lex("1.2.3.") == ERROR);
assert(lex("1.2.3.256") == ERROR);
return 0;
}
Here is an example of using m-tags to parse a semicolon-separated sequence of words (C++). Tag variables are stored in a tree that is packed in a vector.
// re2c $INPUT -o $OUTPUT
#include <assert.h>
#include <vector>
#include <string>

static const int ROOT = -1;

struct Mtag {
int pred;
const char *tag;
};

typedef std::vector<Mtag> MtagTree;
typedef std::vector<std::string> Words;

static void mtag(int *pt, const char *t, MtagTree *tree) {
Mtag m = {*pt, t};
*pt = (int)tree->size();
tree->push_back(m);
}

static void unfold(const MtagTree &tree, int x, int y, Words &words) {
if (x == ROOT) return;
unfold(tree, tree[x].pred, tree[y].pred, words);
const char *px = tree[x].tag, *py = tree[y].tag;
words.push_back(std::string(px, py - px));
}

#define YYMTAGP(t) mtag(&t, YYCURSOR, &tree)
#define YYMTAGN(t) mtag(&t, NULL, &tree)

static bool lex(const char *YYCURSOR, Words &words) {
const char *YYMARKER;
/*!mtags:re2c format = "int @@ = ROOT;"; */
MtagTree tree;
int x, y;
/*!re2c
re2c:yyfill:enable = 0;
re2c:flags:tags = 1;
re2c:define:YYCTYPE = char;
(#x [a-z]+ #y [;])+ {
words.clear();
unfold(tree, x, y, words);
return true;
}
* { return false; }
*/
}

int main() {
Words w;
assert(lex("one;two;three;", w) && w == Words({"one", "two", "three"}));
return 0;
}
With the -f --storable-state option re2c generates a lexer that can store its current state, return to the caller, and later resume operations exactly where it left off. The default mode of operation in re2c is a "pull" model, in which the lexer "pulls" more input whenever it needs it. This may be unacceptable in cases when the input becomes available piece by piece (for example, if the lexer is invoked by the parser, or if the lexer program communicates via a socket protocol with some other program that must wait for a reply from the lexer before it transmits the next message). The storable state feature is intended exactly for such cases: it allows one to generate lexers that work in a "push" model. When the lexer needs more input, it stores its state and returns to the caller. Later, when more input becomes available, the caller resumes the lexer exactly where it stopped. There are a few changes necessary compared to the "pull" model: the lexer state (the current DFA state, the yyaccept variable if it is used, and the input positions) must be stored in a structure that survives across invocations; the YYGETSTATE and YYSETSTATE primitives must be defined; the /*!getstate:re2c*/ directive should be used to generate code that dispatches on the stored state; and YYFILL must return to the caller instead of supplying more input itself. All of these changes can be seen in the example below.
Here is an example of a "push"-model lexer that reads input from stdin and expects a sequence of words separated by spaces and newlines. The lexer loops forever, waiting for more input. It can be terminated by sending a special EOF token --- a word "stop", in which case the lexer terminates successfully and prints the number of words it has seen. Abnormal termination happens in case of a syntax error, premature end of input (without the "stop" word) or in case the buffer is too small to hold a lexeme (for example, if one of the words exceeds buffer size). Premature end of input happens in case the lexer fails to read any input while being in the initial state --- this is the only case when EOF rule matches. Note that the lexer may call YYFILL twice before terminating (and thus require hitting Ctrl+D a few times). First time YYFILL is called when the lexer expects continuation of the current greedy lexeme (either a word or a whitespace sequence). If YYFILL fails, the lexer knows that it has reached the end of the current lexeme and executes the corresponding semantic action. The action jumps to the beginning of the loop, the lexer enters the initial state and calls YYFILL once more. If it fails, the lexer matches EOF rule. (Alternatively EOF rule can be used for termination instead of a special EOF lexeme.)
// re2c $INPUT -o $OUTPUT -f
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define DEBUG 0
#define LOG(...) if (DEBUG) fprintf(stderr, __VA_ARGS__);
#define BUFSIZE 10

typedef struct {
FILE *file;
char buf[BUFSIZE + 1], *lim, *cur, *mar, *tok;
unsigned yyaccept;
int state;
} Input;

static void init(Input *in, FILE *f) {
in->file = f;
in->cur = in->mar = in->tok = in->lim = in->buf + BUFSIZE;
in->lim[0] = 0; // append sentinel symbol
in->yyaccept = 0;
in->state = -1;
}

typedef enum {END, READY, WAITING, BAD_PACKET, BIG_PACKET} Status;

static Status fill(Input *in) {
const size_t shift = in->tok - in->buf;
const size_t free = BUFSIZE - (in->lim - in->tok);
if (free < 1) return BIG_PACKET;
memmove(in->buf, in->tok, BUFSIZE - shift);
in->lim -= shift;
in->cur -= shift;
in->mar -= shift;
in->tok -= shift;
const size_t read = fread(in->lim, 1, free, in->file);
in->lim += read;
in->lim[0] = 0; // append sentinel symbol
return READY;
}

static Status lex(Input *in, unsigned int *recv) {
char yych;
/*!getstate:re2c*/
loop:
in->tok = in->cur;
/*!re2c
re2c:eof = 0;
re2c:api:style = free-form;
re2c:define:YYCTYPE = "char";
re2c:define:YYCURSOR = "in->cur";
re2c:define:YYMARKER = "in->mar";
re2c:define:YYLIMIT = "in->lim";
re2c:define:YYGETSTATE = "in->state";
re2c:define:YYSETSTATE = "in->state = @@;";
re2c:define:YYFILL = "return WAITING;";
packet = [a-z]+[;];
* { return BAD_PACKET; }
$ { return END; }
packet { *recv = *recv + 1; goto loop; }
*/
}

void test(const char **packets, Status status) {
const char *fname = "pipe";
FILE *fw = fopen(fname, "w");
FILE *fr = fopen(fname, "r");
setvbuf(fw, NULL, _IONBF, 0);
setvbuf(fr, NULL, _IONBF, 0);
Input in;
init(&in, fr);
Status st;
unsigned int send = 0, recv = 0;
for (;;) {
st = lex(&in, &recv);
if (st == END) {
LOG("done: got %u packets\n", recv);
break;
} else if (st == WAITING) {
LOG("waiting...\n");
if (*packets) {
LOG("sent packet %u\n", send);
fprintf(fw, "%s", *packets++);
++send;
}
st = fill(&in);
LOG("queue: '%s'\n", in.buf);
if (st == BIG_PACKET) {
LOG("error: packet too big\n");
break;
}
assert(st == READY);
} else {
assert(st == BAD_PACKET);
LOG("error: ill-formed packet\n");
break;
}
}
LOG("\n");
assert(st == status);
if (st == END) assert(recv == send);
fclose(fw);
fclose(fr);
remove(fname);
}

int main() {
const char *packets1[] = {0};
const char *packets2[] = {"zero;", "one;", "two;", "three;", "four;", 0};
const char *packets3[] = {"zer0;", 0};
const char *packets4[] = {"goooooooooogle;", 0};
test(packets1, END);
test(packets2, END);
test(packets3, BAD_PACKET);
test(packets4, BIG_PACKET);
return 0;
}
Reuse mode is enabled with the -r --reusable option. In this mode re2c allows one to reuse definitions, configurations and rules specified by a /*!rules:re2c*/ block in subsequent /*!use:re2c*/ blocks. As of re2c-1.2 it is possible to mix such blocks with normal /*!re2c*/ blocks; prior to that re2c expects a single rules-block followed by use-blocks (normal blocks are disallowed). Use-blocks can have additional definitions, configurations and rules: they are merged with those specified by the rules-block. A very common use case for the -r --reusable option is a lexer that supports multiple input encodings: lexer rules are defined once and reused multiple times with encoding-specific configurations, such as re2c:flags:utf-8.
Below is an example of a multi-encoding lexer: it reads a phrase with Unicode math symbols and accepts input either in UTF-8 or in UTF-32. Note that the --input-encoding utf8 option allows us to write UTF-8-encoded symbols in the regular expressions; without this option re2c would parse them as a plain ASCII byte sequence (and we would have to use hexadecimal escape sequences).
// re2c $INPUT -o $OUTPUT -r --input-encoding utf8
#include <assert.h>
#include <stdint.h>

/*!rules:re2c
re2c:yyfill:enable = 0;
"∀x ∃y: p(x, y)" { return 0; }
* { return 1; }
*/

static int lex_utf8(const uint8_t *YYCURSOR) {
const uint8_t *YYMARKER;
/*!use:re2c
re2c:define:YYCTYPE = uint8_t;
re2c:flags:8 = 1;
*/
}

static int lex_utf32(const uint32_t *YYCURSOR) {
const uint32_t *YYMARKER;
/*!use:re2c
re2c:define:YYCTYPE = uint32_t;
re2c:flags:8 = 0;
re2c:flags:u = 1;
*/
}

int main() {
static const uint8_t s8[] = // UTF-8
{ 0xe2, 0x88, 0x80, 0x78, 0x20, 0xe2, 0x88, 0x83, 0x79
, 0x3a, 0x20, 0x70, 0x28, 0x78, 0x2c, 0x20, 0x79, 0x29 };
static const uint32_t s32[] = // UTF32
{ 0x00002200, 0x00000078, 0x00000020, 0x00002203
, 0x00000079, 0x0000003a, 0x00000020, 0x00000070
, 0x00000028, 0x00000078, 0x0000002c, 0x00000020
, 0x00000079, 0x00000029 };
assert(lex_utf8(s8) == 0);
assert(lex_utf32(s32) == 0);
return 0;
}
re2c supports the following encodings: ASCII (default), EBCDIC (-e), UCS-2 (-w), UTF-16 (-x), UTF-32 (-u) and UTF-8 (-8). See also the inplace configurations of the form re2c:flags:<flag> (for example, re2c:flags:8 or re2c:flags:u, as used in the reuse example above).
The following concepts should be clarified when talking about encodings. A code point is an abstract number that represents a single symbol. A code unit is the smallest unit of memory used in the encoded text (it corresponds to one character in the input stream). One or more code units may be needed to represent a single code point, depending on the encoding. In a fixed-length encoding, each code point is represented with an equal number of code units. In a variable-length encoding, different code points can be represented with a different number of code units. For example, the code point U+2200 (∀) is a single code unit 0x00002200 in UTF-32, but a sequence of three code units 0xE2 0x88 0x80 in UTF-8 (as can be seen in the reuse example above).
In Unicode, values from the range 0xD800 to 0xDFFF (surrogates) are not valid Unicode code points. Any encoded sequence of code units that would map to Unicode code points in the range 0xD800-0xDFFF is ill-formed. The user can control how re2c treats such ill-formed sequences with the --encoding-policy <policy> switch.
For some encodings, there are code units that never occur in a valid encoded stream (e.g., 0xFF byte in UTF-8). If the generated scanner must check for invalid input, the only correct way to do so is to use the default rule (*). Note that the full range rule ([^]) won't catch invalid code units when a variable-length encoding is used ([^] means "any valid code point", whereas the default rule (*) means "any possible code unit").
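For instance, the difference can be seen in a fragment like the following (a schematic sketch, not one of this manual's examples; it assumes UTF-8 input as in the reuse example above):

    /*!re2c
        re2c:define:YYCTYPE = char;
        re2c:yyfill:enable = 0;
        re2c:flags:8 = 1;  // UTF-8: code units are bytes, code points take 1-4 bytes

        [^] { /* any valid code point, i.e. any well-formed UTF-8 sequence */ }
        *   { /* anything else, including invalid code units such as a stray 0xFF byte */ }
    */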
Conditions are enabled with -c --conditions. This option allows one to encode multiple interrelated lexers within the same re2c block.
Each lexer corresponds to a single condition. It starts with a label of the form yyc_name, where name is condition name and yyc prefix can be adjusted with configuration re2c:condprefix. Different lexers are separated with a comment /* *********************************** */ which can be adjusted with configuration re2c:cond:divider.
Furthermore, each condition has a unique identifier of the form yycname, where name is condition name and yyc prefix can be adjusted with configuration re2c:condenumprefix. Identifiers have the type YYCONDTYPE and should be generated with /*!types:re2c*/ directive or -t --type-header option. Users shouldn't define these identifiers manually, as the order of conditions is not specified.
Before all conditions re2c generates entry code that checks the current condition identifier and transfers control flow to the start label of the active condition. After matching some rule of this condition, lexer may either transfer control flow back to the entry code (after executing the associated action and optionally setting another condition with =>), or use :=> shortcut and transition directly to the start label of another condition (skipping the action and the entry code). Configuration re2c:cond:goto allows one to change the default behavior.
Syntactically each rule must be preceded with a list of comma-separated condition names or a wildcard * enclosed in angle brackets < and >. Wildcard means "any condition" and is semantically equivalent to listing all condition names. Here regexp is a regular expression, default refers to the default rule *, and action is a block of code.
Rules with an exclamation mark ! in front of the condition list have a special meaning: they have no regular expression, and the associated action is merged as an entry code to the actions of normal rules. This might be a convenient place to perform a routine task that is common to all rules.
Another special form of rules with an empty condition list <> and no regular expression allows one to specify an "entry condition" that can be used to execute code before entering the lexer. It is semantically equivalent to a condition with number zero, name 0 and an empty regular expression.
// re2c $INPUT -o $OUTPUT -ci
#include <stdint.h>
#include <limits.h>
#include <assert.h>

static const uint64_t ERROR = ~0lu;

/*!types:re2c*/

template<int BASE> static void adddgt(uint64_t &u, unsigned int d) {
u = u * BASE + d;
if (u > UINT32_MAX) u = ERROR;
}

static uint64_t parse_u32(const char *s) {
const char *YYMARKER;
int c = yycinit;
uint64_t u = 0;
/*!re2c
re2c:yyfill:enable = 0;
re2c:api:style = free-form;
re2c:define:YYCTYPE = char;
re2c:define:YYCURSOR = s;
re2c:define:YYGETCONDITION = "c";
re2c:define:YYSETCONDITION = "c = @@;";
<*> * { return ERROR; }
<init> '0b' / [01] :=> bin
<init> "0" :=> oct
<init> "" / [1-9] :=> dec
<init> '0x' / [0-9a-fA-F] :=> hex
<bin, oct, dec, hex> "\x00" { return u; }
<bin> [01] { adddgt<2> (u, s[-1] - '0'); goto yyc_bin; }
<oct> [0-7] { adddgt<8> (u, s[-1] - '0'); goto yyc_oct; }
<dec> [0-9] { adddgt<10>(u, s[-1] - '0'); goto yyc_dec; }
<hex> [0-9] { adddgt<16>(u, s[-1] - '0'); goto yyc_hex; }
<hex> [a-f] { adddgt<16>(u, s[-1] - 'a' + 10); goto yyc_hex; }
<hex> [A-F] { adddgt<16>(u, s[-1] - 'A' + 10); goto yyc_hex; }
*/
}

int main() {
assert(parse_u32("1234567890") == 1234567890);
assert(parse_u32("0b1101") == 13);
assert(parse_u32("0x7Fe") == 2046);
assert(parse_u32("0644") == 420);
assert(parse_u32("9999999999") == ERROR);
assert(parse_u32("") == ERROR);
return 0;
}
With the -S, --skeleton option, re2c ignores all non-re2c code and generates a self-contained C program that can be further compiled and executed. The program consists of lexer code and input data. For each constructed DFA (block or condition) re2c generates a standalone lexer and two files: an .input file with strings derived from the DFA and a .keys file with expected match results. The program runs each lexer on the corresponding .input file and compares results with the expectations. Skeleton programs are very useful for a number of reasons:
The difficulty with generating input data is that for all but the most trivial cases the number of possible input strings is too large (even if the string length is limited). Re2c solves this difficulty by generating sufficiently many strings to cover almost all DFA transitions. It uses the following algorithm. First, it constructs a skeleton of the DFA. For encodings with 1-byte code unit size (such as ASCII, UTF-8 and EBCDIC) skeleton is just an exact copy of the original DFA. For encodings with multibyte code units skeleton is a copy of DFA with certain transitions omitted: namely, re2c takes at most 256 code units for each disjoint continuous range that corresponds to a DFA transition. The chosen values are evenly distributed and include range bounds. Instead of trying to cover all possible paths in the skeleton (which is infeasible) re2c generates sufficiently many paths to cover all skeleton transitions, and thus trigger the corresponding conditional jumps in the lexer. The algorithm implementation is limited by ~1Gb of transitions and consumes constant amount of memory (re2c writes data to file as soon as it is generated).
With the -D, --emit-dot option, re2c does not generate code. Instead, it dumps the generated DFA in DOT format. One can convert this dump to an image of the DFA using Graphviz or another library. Note that this option shows the final DFA after it has gone through a number of optimizations and transformations. Earlier stages can be dumped with various debug options, such as --dump-nfa, --dump-dfa-raw etc. (see the full list of options).
You can find more information about re2c at the official website: http://re2c.org. Similar programs are flex(1), lex(1) and quex (http://quex.sourceforge.net).
Re2c was originally written by Peter Bumbulis in 1993. Since then it has been developed and maintained by multiple volunteers; most notably, Brian Young, Marcus Boerger, Dan Nuffer and Ulya Trofimovich.