RE2C(1)
re2c - compile regular expressions to code
re2c [OPTIONS] INPUT [-o OUTPUT]
re2go [OPTIONS] INPUT [-o OUTPUT]
re2c is a tool for generating fast lexical analyzers for C, C++ and Go.
Note: This manual includes examples for Go, but it refers to re2c (rather than re2go) as the name of the program in general.
A re2c program consists of normal code intermixed with re2c blocks and directives. Each re2c block may contain definitions, configurations and rules.

Definitions are of the form name = regexp; where name is an identifier that consists of letters, digits and underscores, and regexp is a regular expression. Regular expressions may contain other definitions, but recursion is not allowed and each name should be defined before it is used.

Configurations are of the form re2c:config = value; where config is the configuration descriptor and value can be a number, a string or a special word.

Rules consist of a regular expression followed by a semantic action: either a block of code enclosed in curly braces { and }, or a raw one-line piece of code preceded with := and ended with a newline that is not followed by a whitespace. If the input matches the regular expression, the associated semantic action is executed. If multiple rules match, the longest match takes precedence. If multiple rules match the same string, the earlier rule takes precedence. There are two special rules: the default rule * and the EOF rule $. The default rule should always be defined; it has the lowest priority regardless of its place and matches any code unit (not necessarily a valid character, see encoding support). The EOF rule matches the end of input; it should be defined if the corresponding EOF handling method is used. If start conditions are used, rules have more complex syntax.

All rules of a single block are compiled into a deterministic finite-state automaton (DFA) and encoded in the form of a program in the target language. The generated code interfaces with the outer program by means of a few user-defined primitives (see the program interface section). Reusable blocks allow sharing rules, definitions and configurations between different blocks.
//go:generate re2go $INPUT -o $OUTPUT -i
package main

func lex(str string) int {
	var cursor int
	/*!re2c                              // start of re2c block
	re2c:define:YYCTYPE = byte;          // configuration
	re2c:define:YYPEEK = "str[cursor]";  // configuration
	re2c:define:YYSKIP = "cursor += 1";  // configuration
	re2c:yyfill:enable = 0;              // configuration
	re2c:flags:nested-ifs = 1;           // configuration

	number = [1-9][0-9]*;                // named definition

	number { return 0; }                 // normal rule
	*      { return 1; }                 // default rule
	*/
}

func main() {
	if lex("1234\x00") != 0 {
		panic("failed!")
	}
}
// Code generated by re2c, DO NOT EDIT.

//go:generate re2go $INPUT -o $OUTPUT -i
package main

func lex(str string) int {
	var cursor int
	{
		var yych byte
		yych = str[cursor]
		if yych <= '0' {
			goto yy2
		}
		if yych <= '9' {
			goto yy4
		}
	yy2:
		cursor += 1
		{ return 1; }
	yy4:
		cursor += 1
		yych = str[cursor]
		if yych <= '/' {
			goto yy6
		}
		if yych <= '9' {
			goto yy4
		}
	yy6:
		{ return 0; }
	}
}

func main() {
	if lex("1234\x00") != 0 {
		panic("failed!")
	}
}
Re2c has a flexible interface that gives the user both the freedom and the responsibility to define how the generated code interacts with the outer program. There are two major options: the default API, which is based on pointer arithmetic over YYCURSOR, YYMARKER, YYCTXMARKER and YYLIMIT, and the generic API, where all input operations are user-defined primitives.
Generic API has two styles:
#define YYPEEK()                 *YYCURSOR
#define YYSKIP()                 ++YYCURSOR
#define YYBACKUP()               YYMARKER = YYCURSOR
#define YYBACKUPCTX()            YYCTXMARKER = YYCURSOR
#define YYRESTORE()              YYCURSOR = YYMARKER
#define YYRESTORECTX()           YYCURSOR = YYCTXMARKER
#define YYRESTORETAG(tag)        YYCURSOR = tag
#define YYLESSTHAN(len)          YYLIMIT - YYCURSOR < len
#define YYSTAGP(tag)             tag = YYCURSOR
#define YYSTAGN(tag)             tag = NULL
#define YYSHIFT(shift)           YYCURSOR += shift
#define YYSHIFTSTAG(tag, shift)  tag += shift
re2c:define:YYPEEK       = "*YYCURSOR";
re2c:define:YYSKIP       = "++YYCURSOR";
re2c:define:YYBACKUP     = "YYMARKER = YYCURSOR";
re2c:define:YYBACKUPCTX  = "YYCTXMARKER = YYCURSOR";
re2c:define:YYRESTORE    = "YYCURSOR = YYMARKER";
re2c:define:YYRESTORECTX = "YYCURSOR = YYCTXMARKER";
re2c:define:YYRESTORETAG = "YYCURSOR = ${tag}";
re2c:define:YYLESSTHAN   = "YYLIMIT - YYCURSOR < @@{len}";
re2c:define:YYSTAGP      = "@@{tag} = YYCURSOR";
re2c:define:YYSTAGN      = "@@{tag} = NULL";
re2c:define:YYSHIFT      = "YYCURSOR += @@{shift}";
re2c:define:YYSHIFTSTAG  = "@@{tag} += @@{shift}";
Here is a list of API primitives that may be used by the generated code in order to interface with the outer program. Which primitives are needed depends on multiple factors, including the complexity of regular expressions, input representation, buffering, the use of various features and so on. All the necessary primitives should be defined by the user in the form of macros, functions, variables, free-form pieces of code or any other suitable form. Re2c does not (and cannot) check the definitions, so if anything is missing or defined incorrectly the generated code will not compile.
Below is the list of all directives provided by re2c (in no particular order). More information on each directive can be found in the related sections.
re2c uses the following syntax for regular expressions:
Character classes and string literals may contain the following escape sequences: \a, \b, \f, \n, \r, \t, \v, \\, octal escapes \ooo and hexadecimal escapes \xhh, \uhhhh and \Uhhhhhhhh.
Re2c provides a number of ways to handle the end-of-input situation. Which way to use depends on the complexity of regular expressions, performance considerations, the need for input buffering and various other factors. EOF handling is probably the most complex part of the re2c user interface: it requires some understanding of how the generated lexer works. In return, it allows the user to customize the lexer for a particular environment and avoid the unnecessary overhead of generic methods when a simpler method is sufficient. Roughly speaking, there are four main methods:
This is the simplest and the most efficient method. It is applicable when the input is small enough to fit into a continuous memory buffer and there is a natural "sentinel" symbol: a code unit that is not allowed by any of the regular expressions in the grammar (except possibly as a terminating character). The sentinel symbol never appears in well-formed input, therefore it can be appended at the end of input and used as a stop signal by the lexer. A good example of such input is a null-terminated C string, provided that the grammar does not allow NULL in the middle of lexemes. The sentinel method is very efficient, because the lexer does not need to perform any additional checks for the end of input: the check comes naturally as a part of processing the next character. It is very important that the sentinel symbol is not allowed in the middle of a rule, otherwise on some inputs the lexer may read past the end of the buffer and crash or cause memory corruption. Re2c verifies this automatically. Use the re2c:sentinel configuration to specify which sentinel symbol is used.
Below is an example of using sentinel method. Configuration re2c:yyfill:enable = 0; suppresses generation of end-of-input checks and YYFILL calls.
//go:generate re2go $INPUT -o $OUTPUT
package main

import "testing"

// Expects a null-terminated string.
func lex(str string) int {
	var cursor int
	count := 0
loop:
	/*!re2c
	re2c:yyfill:enable = 0;
	re2c:define:YYCTYPE = byte;
	re2c:define:YYPEEK = "str[cursor]";
	re2c:define:YYSKIP = "cursor += 1";

	*      { return -1 }
	[\x00] { return count }
	[a-z]+ { count += 1; goto loop }
	[ ]+   { goto loop }
	*/
}

func TestLex(t *testing.T) {
var tests = []struct {
res int
str string
}{
{0, "\000"},
{3, "one two three\000"},
{-1, "f0ur\000"},
}
for _, x := range tests {
t.Run(x.str, func(t *testing.T) {
res := lex(x.str)
if res != x.res {
t.Errorf("got %d, want %d", res, x.res)
}
})
	}
}
Bounds checking is a generic method: it can be used with any input grammar. The basic idea is simple: check for the end of input before reading the next input character. However, if implemented in a straightforward way, this would be quite inefficient: checking on each input character would cause a major slowdown. Re2c avoids this slowdown by generating checks only in certain key states of the lexer, and letting it run without checks in between the key states. More precisely, re2c computes strongly connected components (SCCs) of the underlying DFA (which roughly correspond to loops), and generates only a few checks per SCC (usually just one, but in general enough to make the SCC acyclic). The check is of the form (YYLIMIT - YYCURSOR) < n, where n is the maximal length of a simple path in the corresponding SCC. If this condition is true, the lexer calls YYFILL(n), which must either supply at least n input characters or not return at all. When the lexer continues after the check, it is certain that the next n characters can be read safely without checks.
This approach reduces the number of checks significantly (and makes the lexer much faster as a result), but it has a downside. Since the lexer checks for multiple characters at once, it may end up in a situation where only a few input characters remain (fewer than n), corresponding to a short path in the SCC, but the lexer cannot proceed because of the check, and YYFILL cannot supply more characters because the end of input has been reached. To solve this problem, re2c requires that additional padding consisting of fake characters is appended at the end of input. The length of the padding should be YYMAXFILL, which equals the maximal n argument passed to YYFILL; its definition must be generated by re2c using the /*!max:re2c*/ directive. The fake characters should not form a valid lexeme suffix, otherwise the lexer may be fooled into matching a fake lexeme. Usually it is a good idea to use NULL characters for padding.
Below is an example of using the bounds-checking method with padding. Note that the grammar rule for single-quoted strings allows arbitrary symbols in the middle of a lexeme, so there is no natural sentinel in the grammar. Strings like "aha\0ha" are perfectly valid, but ill-formed strings like "aha\0 are also possible and shouldn't crash the lexer. In this example we do not use buffer refilling, therefore the YYFILL definition simply returns an error. Note that YYFILL will only be called after the lexer reaches the padding, because only then will the check condition be satisfied.
//go:generate re2go $INPUT -o $OUTPUT
package main

import (
	"strings"
	"testing"
)

/*!max:re2c*/

// Expects a YYMAXFILL-padded string.
func lex(str string) int {
	var cursor int
	limit := len(str)
	count := 0
loop:
/*!re2c
re2c:define:YYCTYPE = byte;
re2c:define:YYPEEK = "str[cursor]";
re2c:define:YYSKIP = "cursor += 1";
re2c:define:YYLESSTHAN = "limit - cursor < @@{len}";
re2c:define:YYFILL = "return -1";
* { return -1 }
[\x00] { return count }
['] ([^'\\] | [\\][^])* ['] { count += 1; goto loop }
[ ]+ { goto loop }
	*/
}

// Pad the string with YYMAXFILL zeroes at the end.
func pad(str string) string {
	return str + strings.Repeat("\000", YYMAXFILL)
}

func TestLex(t *testing.T) {
var tests = []struct {
res int
str string
}{
{0, ""},
{3, "'qu\000tes' 'are' 'fine: \\'' "},
{-1, "'unterminated\\'"},
}
for _, x := range tests {
t.Run(x.str, func(t *testing.T) {
res := lex(pad(x.str))
if res != x.res {
t.Errorf("got %d, want %d", res, x.res)
}
})
	}
}
EOF rule $ was introduced in version 1.2. It is a hybrid approach that tries to take the best of both worlds: simplicity and efficiency of the sentinel method combined with the generality of bounds-checking method. The idea is to appoint an arbitrary symbol to be the sentinel, and only perform further bounds checking if the sentinel symbol matches (more precisely, if the symbol class that contains it matches). The check is of the form YYLIMIT <= YYCURSOR. If this condition is not satisfied, then the sentinel is just an ordinary input character and the lexer continues. Otherwise this is a real sentinel, and the lexer calls YYFILL(). If YYFILL returns zero, the lexer assumes that it has more input and tries to re-match. Otherwise YYFILL returns non-zero and the lexer knows that it has reached the end of input. At this point there are three possibilities. First, it might have already matched a shorter lexeme --- in this case it just rolls back to the last accepting state. Second, it might have consumed some characters, but failed to match --- in this case it falls back to default rule *. Finally, it might be in the initial state --- in this (and only this!) case it matches EOF rule $.
Below is an example of using EOF rule. Configuration re2c:yyfill:enable = 0; suppresses generation of YYFILL calls (but not the bounds checks).
//go:generate re2go $INPUT -o $OUTPUT
package main

import "testing"

// Expects a null-terminated string.
func lex(str string) int {
	var cursor, marker int
	limit := len(str) - 1 // limit points at the terminating null
	count := 0
loop:
/*!re2c
re2c:yyfill:enable = 0;
re2c:eof = 0;
re2c:define:YYCTYPE = byte;
re2c:define:YYPEEK = "str[cursor]";
re2c:define:YYSKIP = "cursor += 1";
re2c:define:YYBACKUP = "marker = cursor";
re2c:define:YYRESTORE = "cursor = marker";
re2c:define:YYLESSTHAN = "limit <= cursor";
* { return -1 }
$ { return count }
['] ([^'\\] | [\\][^])* ['] { count += 1; goto loop }
[ ]+ { goto loop }
	*/
}

func TestLex(t *testing.T) {
var tests = []struct {
res int
str string
}{
{0, "\000"},
{3, "'qu\000tes' 'are' 'fine: \\'' \000"},
{-1, "'unterminated\\'\000"},
}
for _, x := range tests {
t.Run(x.str, func(t *testing.T) {
res := lex(x.str)
if res != x.res {
t.Errorf("got %d, want %d", res, x.res)
}
})
	}
}
Generic API can be used with any of the above methods. It also allows one to use a user-defined method by placing EOF checks in one of the basic primitives. Usually this is either YYSKIP (the check is performed when advancing to the next input character), or YYPEEK (the check is performed when reading the next input character). The resulting methods are inefficient, as they check on each input character. However, they can be useful in cases when the input cannot be buffered or padded and does not contain a sentinel character at the end. One should be cautious when using such ad-hoc methods, as it is easy to overlook some corner cases and come up with a method that only partially works. Also it should be noted that not everything can be expressed via generic API: for example, it is impossible to reimplement the way EOF rule works (in particular, it is impossible to re-match the character after successful YYFILL).
Below is an example of using YYSKIP to perform bounds checking without padding. YYFILL generation is suppressed with the re2c:yyfill:enable = 0; configuration. Note that with a more complex grammar this method might not work: if two rules overlap, the EOF check may fail after a shorter lexeme has already been matched (this does not happen in our example, as there are no overlapping rules).
//go:generate re2go $INPUT -o $OUTPUT
package main

import "testing"

// Returns a "fake" terminating null if cursor has reached limit.
func peek(str string, cursor int, limit int) byte {
	if cursor >= limit {
		return 0 // fake null
	} else {
		return str[cursor]
	}
}

// Expects a string without a terminating null.
func lex(str string) int {
	var cursor, marker int
	limit := len(str)
	count := 0
loop:
/*!re2c
re2c:yyfill:enable = 0;
re2c:eof = 0;
re2c:define:YYCTYPE = byte;
re2c:define:YYLESSTHAN = "cursor >= limit";
re2c:define:YYPEEK = "peek(str, cursor, limit)";
re2c:define:YYSKIP = "cursor += 1";
re2c:define:YYBACKUP = "marker = cursor";
re2c:define:YYRESTORE = "cursor = marker";
* { return -1 }
$ { return count }
['] ([^'\\] | [\\][^])* ['] { count += 1; goto loop }
[ ]+ { goto loop }
	*/
}

func TestLex(t *testing.T) {
var tests = []struct {
res int
str string
}{
{0, ""},
{3, "'qu\000tes' 'are' 'fine: \\'' "},
{-1, "'unterminated\\'"},
}
for _, x := range tests {
t.Run(x.str, func(t *testing.T) {
res := lex(x.str)
if res != x.res {
t.Errorf("got %d, want %d", res, x.res)
}
})
	}
}
The need for buffering arises when the input cannot be mapped into memory all at once: either it is too large, or it comes in a streaming fashion (like reading from a socket). The usual technique in such cases is to allocate a fixed-size memory buffer and process input in chunks that fit into the buffer. When the current chunk is processed, it is moved out and new data is moved in. In practice it is somewhat more complex, because the lexer state consists not of a single input position, but of a set of interrelated positions:
Not all of these are used in every case, but if used, they must be updated by YYFILL. All active positions are contained in the segment between token and cursor, therefore everything between the buffer start and token can be discarded, the segment from token up to limit should be moved to the beginning of the buffer, and the free space at the end of the buffer should be filled with new data. In order to avoid frequent YYFILL calls it is best to fill in as many input characters as possible (even though fewer characters might suffice to resume the lexer). The details of the YYFILL implementation differ slightly depending on which EOF handling method is used: the case of EOF rule is somewhat simpler than the case of bounds checking with padding. Also note that if the -f --storable-state option is used, YYFILL has slightly different semantics (described in the section about storable state).
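The shift-and-refill bookkeeping described above can be sketched as follows. This is a minimal illustration; the Input struct and its field names are assumptions made for this sketch, not part of re2c's interface (the full runnable lexers use the same idea inside their fill functions).

```go
package main

import "fmt"

// Input models buffered lexer state (field names are illustrative,
// not prescribed by re2c).
type Input struct {
	data                         []byte
	token, marker, cursor, limit int
}

// shift discards the bytes before the current lexeme and moves the
// segment [token, limit) to the beginning of the buffer, adjusting all
// active positions by the same offset. After the call, the free space
// data[limit:] can be filled with new input.
func (in *Input) shift() {
	n := copy(in.data, in.data[in.token:in.limit])
	in.cursor -= in.token
	in.marker -= in.token
	in.token = 0
	in.limit = n
}

func main() {
	in := &Input{
		data:   []byte("old lexeme??????"), // "??????" is free space
		token:  4,                          // start of the current lexeme
		marker: 8,                          // backup position
		cursor: 10,                         // current position
		limit:  10,                         // one past the last valid byte
	}
	in.shift()
	fmt.Println(string(in.data[:in.limit]), in.token, in.marker, in.cursor)
	// Prints: lexeme 0 4 6
}
```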
If EOF rule is used, YYFILL is a function-like primitive that accepts no arguments and returns a value which is checked against zero. YYFILL invocation is triggered by condition YYLIMIT <= YYCURSOR in default API and YYLESSTHAN() in generic API. A non-zero return value means that YYFILL has failed. A successful YYFILL call must supply at least one character and adjust input positions accordingly. Limit must always be set to one after the last input position in buffer, and the character at the limit position must be the sentinel symbol specified by re2c:eof configuration. The pictures below show the relative locations of input positions in buffer before and after YYFILL call (sentinel symbol is marked with #, and the second picture shows the case when there is not enough input to fill the whole buffer).
  <-- shift -->
>-A------------B---------C-------------D#-----------E->
  buffer       token     marker        limit,
                                       cursor

>-A------------B---------C-------------D------------E#->
  buffer,      marker    cursor                     limit
  token
  <-- shift -->
>-A------------B---------C-------------D#--E (EOF)
  buffer       token     marker        limit,
                                       cursor

>-A------------B---------C-------------D---E#........
  buffer,      marker    cursor            limit
  token
Here is an example of a program that reads the input file input.txt in chunks and uses EOF rule (the buffer size in the example is intentionally small, to trigger buffer refills).
//go:generate re2go $INPUT -o $OUTPUT
package main

import (
	"os"
	"testing"
)

// Intentionally small to trigger buffer refill.
const SIZE int = 16

type Input struct {
file *os.File
data []byte
cursor int
marker int
token int
limit int
	eof    bool
}

func fill(in *Input) int {
// If nothing can be read, fail.
if in.eof {
return 1
}
// Check if at least some space can be freed.
if in.token == 0 {
// In real life can reallocate a larger buffer.
panic("fill error: lexeme too long")
}
// Discard everything up to the start of the current lexeme,
// shift buffer contents and adjust offsets.
copy(in.data[0:], in.data[in.token:in.limit])
in.cursor -= in.token
in.marker -= in.token
in.limit -= in.token
in.token = 0
// Read new data (as much as possible to fill the buffer).
n, _ := in.file.Read(in.data[in.limit:SIZE])
in.limit += n
in.data[in.limit] = 0
// If read less than expected, this is the end of input.
in.eof = in.limit < SIZE
// If nothing has been read, fail.
if n == 0 {
return 1
}
	return 0
}

func lex(in *Input) int {
	count := 0
loop:
in.token = in.cursor
/*!re2c
re2c:eof = 0;
re2c:define:YYCTYPE = byte;
re2c:define:YYPEEK = "in.data[in.cursor]";
re2c:define:YYSKIP = "in.cursor += 1";
re2c:define:YYBACKUP = "in.marker = in.cursor";
re2c:define:YYRESTORE = "in.cursor = in.marker";
re2c:define:YYLESSTHAN = "in.limit <= in.cursor";
re2c:define:YYFILL = "fill(in) == 0";
* { return -1 }
$ { return count }
['] ([^'\\] | [\\][^])* ['] { count += 1; goto loop }
[ ]+ { goto loop }
	*/
}

// Prepare a file with the input text and run the lexer.
func test(data string) (result int) {
tmpfile := "input.txt"
f, _ := os.Create(tmpfile)
f.WriteString(data)
f.Seek(0, 0)
defer func() {
if r := recover(); r != nil {
result = -2
}
f.Close()
os.Remove(tmpfile)
}()
in := &Input{
file: f,
data: make([]byte, SIZE+1),
cursor: SIZE,
marker: SIZE,
token: SIZE,
limit: SIZE,
eof: false,
}
	return lex(in)
}

func TestLex(t *testing.T) {
var tests = []struct {
res int
str string
}{
{0, ""},
{2, "'one' 'two'"},
{3, "'qu\000tes' 'are' 'fine: \\'' "},
{-1, "'unterminated\\'"},
{-2, "'loooooooooooong'"},
}
for _, x := range tests {
t.Run(x.str, func(t *testing.T) {
res := test(x.str)
if res != x.res {
t.Errorf("got %d, want %d", res, x.res)
}
})
	}
}
In the default case (when EOF rule is not used) YYFILL is a function-like primitive that accepts a single argument and does not return a value. YYFILL invocation is triggered by the condition (YYLIMIT - YYCURSOR) < n in default API and YYLESSTHAN(n) in generic API. The argument passed to YYFILL is the minimal number of characters that must be supplied; if YYFILL cannot supply them, it must not return to the lexer (for that reason it is best implemented as a macro that returns from the calling function on failure). After a successful YYFILL invocation the limit position must be set either to one after the last input position in the buffer, or to the end of the YYMAXFILL padding (in case YYFILL has successfully read at least n characters, but not enough to fill the entire buffer). The pictures below show the relative locations of input positions in the buffer before and after a YYFILL invocation (the YYMAXFILL padding in the second picture is marked with # symbols).
  <-- shift -->                        <-- need -->
>-A------------B---------C-----D-------E---F--------G->
  buffer       token     marker cursor limit

>-A------------B---------C-----D-------E---F--------G->
  buffer,      marker    cursor                     limit
  token
  <-- shift -->                        <-- need -->
>-A------------B---------C-----D-------E-F (EOF)
  buffer       token     marker cursor limit

>-A------------B---------C-----D-------E-F###############
  buffer,      marker    cursor                     limit
  token                                  <- YYMAXFILL ->
Here is an example of a program that reads the input file input.txt in chunks and uses the bounds-checking method with padding (the buffer size in the example is intentionally small, to trigger buffer refills).
//go:generate re2go $INPUT -o $OUTPUT
package main

import (
	"fmt"
	"os"
	"testing"
)

/*!max:re2c*/

// Intentionally small to trigger buffer refill.
const SIZE int = 16

type Input struct {
file *os.File
data []byte
cursor int
marker int
token int
limit int
	eof    bool
}

func fill(in *Input, need int) int {
// End of input has already been reached, nothing to do.
if in.eof {
return -1 // Error: unexpected EOF
}
// Check if after moving the current lexeme to the beginning
// of buffer there will be enough free space.
if SIZE-(in.cursor-in.token) < need {
return -2 // Error: lexeme too long
}
// Discard everything up to the start of the current lexeme,
// shift buffer contents and adjust offsets.
copy(in.data[0:], in.data[in.token:in.limit])
in.cursor -= in.token
in.marker -= in.token
in.limit -= in.token
in.token = 0
// Read new data (as much as possible to fill the buffer).
n, _ := in.file.Read(in.data[in.limit:SIZE])
in.limit += n
// If read less than expected, this is the end of input.
in.eof = in.limit < SIZE
// If end of input, add padding so that the lexer can read
// the remaining characters at the end of buffer.
if in.eof {
for i := 0; i < YYMAXFILL; i += 1 {
in.data[in.limit+i] = 0
}
in.limit += YYMAXFILL
}
	return 0
}

func lex(in *Input) int {
	count := 0
loop:
in.token = in.cursor
/*!re2c
re2c:define:YYCTYPE = byte;
re2c:define:YYPEEK = "in.data[in.cursor]";
re2c:define:YYSKIP = "in.cursor += 1";
re2c:define:YYBACKUP = "in.marker = in.cursor";
re2c:define:YYRESTORE = "in.cursor = in.marker";
re2c:define:YYLESSTHAN = "in.limit-in.cursor < @@{len}";
re2c:define:YYFILL = "if r := fill(in, @@{len}); r != 0 { return r }";
* { return -1 }
[\x00] { return count }
['] ([^'\\] | [\\][^])* ['] { count += 1; goto loop }
[ ]+ { goto loop }
	*/
}

// Prepare a file with the input text and run the lexer.
func test(data string) (result int) {
tmpfile := "input.txt"
f, _ := os.Create(tmpfile)
f.WriteString(data)
f.Seek(0, 0)
defer func() {
if r := recover(); r != nil {
fmt.Println(r)
result = -2
}
f.Close()
os.Remove(tmpfile)
}()
in := &Input{
file: f,
data: make([]byte, SIZE+YYMAXFILL),
cursor: SIZE,
marker: SIZE,
token: SIZE,
limit: SIZE,
eof: false,
}
	return lex(in)
}

func TestLex(t *testing.T) {
var tests = []struct {
res int
str string
}{
{0, ""},
{2, "'one' 'two'"},
{3, "'qu\000tes' 'are' 'fine: \\'' "},
{-1, "'unterminated\\'"},
{-2, "'loooooooooooong'"},
}
for _, x := range tests {
t.Run(x.str, func(t *testing.T) {
res := test(x.str)
if res != x.res {
t.Errorf("got %d, want %d", res, x.res)
}
})
	}
}
Re2c allows one to include other files using the directive /*!include:re2c FILE */, where FILE is the name of the file to be included. Re2c looks for included files in the directory of the including file and in include locations, which can be specified with the -I option. The re2c include directive works in the same way as the C/C++ #include: the contents of FILE are copy-pasted verbatim in place of the directive. Include files may have further includes of their own. Re2c provides some predefined include files that can be found in the include/ subdirectory of the project. These files contain definitions that can be useful to other projects (such as Unicode categories) and form something like a standard library for re2c. Here is an example:
// File "definitions.go" (the included file):

const (
	ResultOk = iota
	ResultFail
)

/*!re2c
	number = [1-9][0-9]*;
*/

// The main file:

//go:generate re2go -c $INPUT -o $OUTPUT -i
package main

import "testing"

/*!include:re2c "definitions.go" */

func lex(str string) int {
var cursor int
/*!re2c
re2c:yyfill:enable = 0;
re2c:define:YYCTYPE = byte;
re2c:define:YYPEEK = "str[cursor]";
re2c:define:YYSKIP = "cursor += 1";
number { return ResultOk }
* { return ResultFail }
	*/
}

func TestLex(t *testing.T) {
if lex("123\000") != ResultOk {
t.Errorf("error")
	}
}
Re2c allows one to generate a header file from the input .re file using the option -t --type-header (or the configuration re2c:flags:type-header) and the directives /*!header:re2c:on*/ and /*!header:re2c:off*/. The first directive marks the beginning of the header file, and the second one marks its end. Everything between these directives is processed by re2c, and the generated code is written to the file specified by the -t --type-header option (or to stdout if this option was not used). An autogenerated header file may be needed in cases when re2c is used to generate definitions of constants, variables and structs that must be visible from other translation units.
Here is an example of generating a header file that contains the definition of the lexer state with tag variables (the number of variables depends on the regular grammar and is unknown to the programmer).
//go:generate re2go $INPUT -o $OUTPUT -i --type-header src/lexer/lexer.go
package main

import (
	"lexer" // generated by re2c
	"testing"
)

/*!header:re2c:on*/
package lexer

type State struct {
	Data string
	Cur, Mar, /*!stags:re2c format="@@{tag}"; separator=", "; */ int
}
/*!header:re2c:off*/

func lex(st *lexer.State) int {
/*!re2c
re2c:flags:type-header = "src/lexer/lexer.go";
re2c:yyfill:enable = 0;
re2c:flags:tags = 1;
re2c:define:YYCTYPE = byte;
re2c:define:YYPEEK = "st.Data[st.Cur]";
re2c:define:YYSKIP = "st.Cur++";
re2c:define:YYBACKUP = "st.Mar = st.Cur";
re2c:define:YYRESTORE = "st.Cur = st.Mar";
re2c:define:YYRESTORETAG = "st.Cur = @@{tag}";
re2c:define:YYSTAGP = "@@{tag} = st.Cur";
re2c:tags:expression = "st.@@{tag}";
re2c:tags:prefix = "Tag";
[x]{1,4} / [x]{3,5} { return 0 } // ambiguous trailing context
* { return 1 }
	*/
}

func TestLex(t *testing.T) {
st := &lexer.State{
Data: "xxxxxxxx\x00",
}
if !(lex(st) == 0 && st.Cur == 4) {
t.Error("failed")
	}
}
// Code generated by re2c, DO NOT EDIT.
package lexer

type State struct {
	Data string
	Cur, Mar, Tag1, Tag2, Tag3 int
}
Re2c has two options for submatch extraction.
The first option is -T --tags. With this option one can use standalone tags of the form @stag and #mtag, where stag and mtag are arbitrary user-defined names. Tags can be used anywhere inside a regular expression; semantically they are just position markers. Tags of the form @stag are called s-tags: they denote a single submatch value (the last input position where this tag matched). Tags of the form #mtag are called m-tags: they denote multiple submatch values (the whole history of repetitions of this tag). All tags should be defined by the user as variables with the corresponding names. With standalone tags re2c uses leftmost greedy disambiguation: submatch positions correspond to the leftmost matching path through the regular expression.
The second option is -P --posix-captures: it enables POSIX-compliant capturing groups. In this mode parentheses in regular expressions denote the beginning and the end of capturing groups; the whole regular expression is group number zero. The number of groups for the matching rule is stored in a variable yynmatch, and submatch results are stored in the yypmatch array. Both yynmatch and yypmatch should be defined by the user, and yypmatch must have room for at least yynmatch * 2 elements. Re2c provides a directive /*!maxnmatch:re2c*/ that defines YYMAXNMATCH: a constant equal to the maximal value of yynmatch among all rules. Note that re2c implements POSIX-compliant disambiguation: each subexpression matches as long as possible, and subexpressions that start earlier in the regular expression have priority over those starting later. Capturing groups are translated into s-tags under the hood, therefore we use the word "tag" to describe them as well.
With both -P --posix-captures and -T --tags options re2c uses the efficient submatch extraction algorithm described in the Tagged Deterministic Finite Automata with Lookahead paper. The overhead of submatch extraction in the generated lexer grows with the number of tags; if this number is moderate, the overhead is barely noticeable. In the lexer, tags are implemented using a number of tag variables generated by re2c. There is no one-to-one correspondence between tag variables and tags: a single variable may be reused for different tags, and one tag may require multiple variables to hold all its ambiguous values. Eventually the ambiguity is resolved, and only one final variable per tag survives. When a rule matches, all its tags are set to the values of the corresponding tag variables. The exact number of tag variables is unknown to the user; this number is determined by re2c. However, tag variables should be defined by the user as a part of the lexer state and updated by YYFILL, therefore re2c provides the directives /*!stags:re2c*/ and /*!mtags:re2c*/ that can be used to declare, initialize and manipulate tag variables. These directives have two optional configurations: format = "@@"; (specifies the template where @@ is substituted with the name of each tag variable), and separator = ""; (specifies the piece of code used to join the generated pieces for different tag variables).
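For instance, suppose re2c happens to generate two tag variables for a block (the names yyt1 and yyt2 below are purely illustrative; the real names and their number are chosen by re2c). Per the format/separator semantics above, a sketch of the expansion looks like this:

```go
// A directive like this in the source file:
/*!stags:re2c format = 'var @@ int'; separator = "\n"; */

// would expand to something like (variable names chosen by re2c):
var yyt1 int
var yyt2 int
```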
S-tags support the following operations: setting a tag to the current input position (YYSTAGP), setting a tag to a default "no match" value (YYSTAGN), and shifting a tag by a fixed offset (YYSHIFTSTAG).
M-tags support the following operations: appending the current input position to the tag history (YYMTAGP) and appending a default "no match" value to the history (YYMTAGN).
S-tags can be implemented as scalar values (pointers or offsets). M-tags need a more complex representation, as they need to store a sequence of tag values. The most naive and inefficient representation of an m-tag is a list (array, vector) of tag values; a more efficient representation is to store all m-tags in a prefix tree represented as an array of nodes (v, p), where v is a tag value and p is a pointer to the parent node.
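The prefix-tree representation can be sketched as follows. This is a hypothetical illustration of the idea, not re2c output; all names are made up for the sketch. Histories of different m-tags share one tree, so appending a value is O(1), and the full history is recovered by walking parent links.

```go
package main

import "fmt"

// mtagNode is one node of the prefix tree: a tag value v and the index
// of its parent node p (-1 marks the root of a history).
type mtagNode struct {
	v int // tag value (input position, or -1 for "no match")
	p int // index of the parent node, or -1
}

type mtagTrie struct {
	nodes []mtagNode
}

// add appends value to the history identified by tag (an index into
// nodes, or -1 for an empty history) and returns the new history id.
func (t *mtagTrie) add(tag int, value int) int {
	t.nodes = append(t.nodes, mtagNode{v: value, p: tag})
	return len(t.nodes) - 1
}

// unwind recovers the full history of a tag in chronological order.
func (t *mtagTrie) unwind(tag int) []int {
	var history []int
	for ; tag != -1; tag = t.nodes[tag].p {
		history = append(history, t.nodes[tag].v)
	}
	// Reverse: values were collected from newest to oldest.
	for i, j := 0, len(history)-1; i < j; i, j = i+1, j-1 {
		history[i], history[j] = history[j], history[i]
	}
	return history
}

func main() {
	trie := &mtagTrie{}
	tag := -1 // empty history
	tag = trie.add(tag, 3)
	tag = trie.add(tag, 7)
	tag = trie.add(tag, 12)
	fmt.Println(trie.unwind(tag)) // Prints: [3 7 12]
}
```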
Here is an example of using s-tags to parse an IPv4 address.
//go:generate re2go $INPUT -o $OUTPUT
package main

import (
	"errors"
	"testing"
)

var eBadIP error = errors.New("bad IP")

func lex(str string) (int, error) {
var cursor, marker, o1, o2, o3, o4 int
/*!stags:re2c format = 'var @@ int'; separator = "\n\t"; */
num := func(pos int, end int) int {
n := 0
for ; pos < end; pos++ {
n = n*10 + int(str[pos]-'0')
}
return n
}
/*!re2c
re2c:flags:tags = 1;
re2c:yyfill:enable = 0;
re2c:define:YYCTYPE = byte;
re2c:define:YYPEEK = "str[cursor]";
re2c:define:YYSKIP = "cursor += 1";
re2c:define:YYBACKUP = "marker = cursor";
re2c:define:YYRESTORE = "cursor = marker";
re2c:define:YYSTAGP = "@@{tag} = cursor";
re2c:define:YYSTAGN = "@@{tag} = -1";
octet = [0-9] | [1-9][0-9] | [1][0-9][0-9] | [2][0-4][0-9] | [2][5][0-5];
dot = [.];
end = [\x00];
@o1 octet dot @o2 octet dot @o3 octet dot @o4 octet end {
return num(o4, cursor-1)+
(num(o3, o4-1) << 8)+
(num(o2, o3-1) << 16)+
(num(o1, o2-1) << 24), nil
}
* { return 0, eBadIP }
*/
}

func TestLex(t *testing.T) {
var tests = []struct {
str string
res int
err error
}{
{"1.2.3.4\000", 0x01020304, nil},
{"127.0.0.1\000", 0x7f000001, nil},
{"255.255.255.255\000", 0xffffffff, nil},
{"1.2.3.\000", 0, eBadIP},
{"1.2.3.256\000", 0, eBadIP},
}
for _, x := range tests {
t.Run(x.str, func(t *testing.T) {
res, err := lex(x.str)
if !(res == x.res && err == x.err) {
t.Errorf("got %d, want %d", res, x.res)
}
})
}
}
Here is an example of using POSIX capturing groups to parse an IPv4 address.
//go:generate re2go $INPUT -o $OUTPUT
package main

import (
"errors"
"testing" ) /*!maxnmatch:re2c*/ var eBadIP error = errors.New("bad IP") func lex(str string) (int, error) {
var cursor, marker, yynmatch int
yypmatch := make([]int, YYMAXNMATCH*2)
/*!stags:re2c format = 'var @@ int'; separator = "\n\t"; */
num := func(pos int, end int) int {
n := 0
for ; pos < end; pos++ {
n = n*10 + int(str[pos]-'0')
}
return n
}
/*!re2c
re2c:flags:posix-captures = 1;
re2c:yyfill:enable = 0;
re2c:define:YYCTYPE = byte;
re2c:define:YYPEEK = "str[cursor]";
re2c:define:YYSKIP = "cursor += 1";
re2c:define:YYBACKUP = "marker = cursor";
re2c:define:YYRESTORE = "cursor = marker";
re2c:define:YYSTAGP = "@@{tag} = cursor";
re2c:define:YYSTAGN = "@@{tag} = -1";
re2c:define:YYSHIFTSTAG = "@@{tag} += @@{shift}";
octet = [0-9] | [1-9][0-9] | [1][0-9][0-9] | [2][0-4][0-9] | [2][5][0-5];
dot = [.];
end = [\x00];
(octet) dot (octet) dot (octet) dot (octet) end {
if yynmatch != 5 {
panic("expected 5 submatch groups")
}
return num(yypmatch[8], yypmatch[9])+
(num(yypmatch[6], yypmatch[7]) << 8)+
(num(yypmatch[4], yypmatch[5]) << 16)+
(num(yypmatch[2], yypmatch[3]) << 24), nil
}
* { return 0, eBadIP }
*/
}

func TestLex(t *testing.T) {
var tests = []struct {
str string
res int
err error
}{
{"1.2.3.4\000", 0x01020304, nil},
{"127.0.0.1\000", 0x7f000001, nil},
{"255.255.255.255\000", 0xffffffff, nil},
{"1.2.3.\000", 0, eBadIP},
{"1.2.3.256\000", 0, eBadIP},
}
for _, x := range tests {
t.Run(x.str, func(t *testing.T) {
res, err := lex(x.str)
if !(res == x.res && err == x.err) {
t.Errorf("got %d, want %d", res, x.res)
}
})
}
}
Here is an example of using m-tags to parse a semicolon-separated sequence of words. Tag variables are stored in a tree that is packed into a slice.
//go:generate re2go $INPUT -o $OUTPUT
package main

import (
"reflect"
"testing" ) const (
mtagRoot int = -1
mtagNil int = -2
)

type mtagElem struct {
val int
pred int
}

type mtagTrie = []mtagElem

func createTrie(capacity int) mtagTrie {
return make([]mtagElem, 0, capacity)
}

func mtag(trie *mtagTrie, tag int, val int) int {
*trie = append(*trie, mtagElem{val, tag})
return len(*trie) - 1
}

// Recursively unwind both tag histories and construct submatches.
func unwind(trie mtagTrie, x int, y int, str string) []string {
if x == mtagRoot && y == mtagRoot {
return []string{}
} else if x == mtagRoot || y == mtagRoot {
panic("tag histories have different length")
} else {
xval := trie[x].val
yval := trie[y].val
ss := unwind(trie, trie[x].pred, trie[y].pred, str)
// Either both tags should be nil, or none of them.
if xval == mtagNil && yval == mtagNil {
return ss
} else if xval == mtagNil || yval == mtagNil {
panic("tag histories positive/negative tag mismatch")
} else {
s := str[xval:yval]
return append(ss, s)
}
}
}

func lex(str string) []string {
var cursor, marker int
trie := createTrie(256)
x := mtagRoot
y := mtagRoot
/*!mtags:re2c format = "@@ := mtagRoot"; separator = "\n\t"; */
/*!re2c
re2c:flags:tags = 1;
re2c:yyfill:enable = 0;
re2c:define:YYCTYPE = byte;
re2c:define:YYPEEK = "str[cursor]";
re2c:define:YYSKIP = "cursor += 1";
re2c:define:YYBACKUP = "marker = cursor";
re2c:define:YYRESTORE = "cursor = marker";
re2c:define:YYMTAGP = "@@{tag} = mtag(&trie, @@{tag}, cursor)";
re2c:define:YYMTAGN = "@@{tag} = mtag(&trie, @@{tag}, mtagNil)";
end = [\x00];
(#x [a-z]+ #y [;])* end { return unwind(trie, x, y, str) }
* { return nil }
*/
}

func TestLex(t *testing.T) {
var tests = []struct {
str string
res []string
}{
{"\000", []string{}},
{"one;two;three;\000", []string{"one", "two", "three"}},
{"one;two\000", nil},
}
for _, x := range tests {
t.Run(x.str, func(t *testing.T) {
res := lex(x.str)
if !reflect.DeepEqual(res, x.res) {
t.Errorf("got %v, want %v", res, x.res)
}
})
}
}
With the -f --storable-state option re2c generates a lexer that can store its current state, return to the caller, and later resume operations exactly where it left off. The default mode of operation in re2c is a "pull" model, in which the lexer "pulls" more input whenever it needs it. This may be unacceptable when the input becomes available piece by piece (for example, if the lexer is invoked by the parser, or if the lexer program communicates via a socket protocol with some other program that must wait for a reply from the lexer before it transmits the next message). The storable state feature is intended exactly for such cases: it allows one to generate lexers that work in a "push" model. When the lexer needs more input, it stores its state and returns to the caller. Later, when more input becomes available, the caller resumes the lexer exactly where it stopped. There are a few changes necessary compared to the "pull" model: the YYGETSTATE and YYSETSTATE primitives must be defined, the /*!getstate:re2c*/ directive should be placed at the point where control is transferred to the stored lexer state, YYFILL should return to the caller rather than read more input in place, and the state should be initialized to -1 before the first invocation of the lexer.
Here is an example of a "push"-model lexer that reads input from stdin and expects a sequence of words separated by spaces and newlines. The lexer loops forever, waiting for more input. It can be terminated by sending a special EOF token --- the word "stop", in which case the lexer terminates successfully and prints the number of words it has seen. Abnormal termination happens in case of a syntax error, premature end of input (without the "stop" word), or if the buffer is too small to hold a lexeme (for example, if one of the words exceeds the buffer size). Premature end of input happens when the lexer fails to read any input while in the initial state --- this is the only case when the EOF rule matches. Note that the lexer may call YYFILL twice before terminating (and thus require hitting Ctrl+D a few times). The first time, YYFILL is called when the lexer expects a continuation of the current greedy lexeme (either a word or a whitespace sequence). If YYFILL fails, the lexer knows that it has reached the end of the current lexeme and executes the corresponding semantic action. The action jumps to the beginning of the loop, the lexer enters the initial state and calls YYFILL once more. If it fails, the lexer matches the EOF rule. (Alternatively, the EOF rule can be used for termination instead of a special EOF lexeme.)
//go:generate re2go -f $INPUT -o $OUTPUT
package main

import (
"fmt"
"os"
"testing" ) // Intentionally small to trigger buffer refill. const SIZE int = 16 type Input struct {
file *os.File
data []byte
cursor int
marker int
token int
limit int
state int
yyaccept int
}

const (
lexEnd = iota
lexReady
lexWaitingForInput
lexPacketBroken
lexPacketTooBig
lexCountMismatch
)

func fill(in *Input) int {
if in.token == 0 {
// Error: no space can be freed.
// In real life can reallocate a larger buffer.
return lexPacketTooBig
}
// Discard everything up to the start of the current lexeme,
// shift buffer contents and adjust offsets.
copy(in.data[0:], in.data[in.token:in.limit])
in.cursor -= in.token
in.marker -= in.token
in.limit -= in.token
in.token = 0
// Read new data (as much as possible to fill the buffer).
n, _ := in.file.Read(in.data[in.limit:SIZE])
in.limit += n
in.data[in.limit] = 0 // append sentinel symbol
return lexReady
}

func lex(in *Input, recv *int) int {
var yych byte
/*!getstate:re2c*/
loop:
in.token = in.cursor
/*!re2c
re2c:eof = 0;
re2c:define:YYPEEK = "in.data[in.cursor]";
re2c:define:YYSKIP = "in.cursor += 1";
re2c:define:YYBACKUP = "in.marker = in.cursor";
re2c:define:YYRESTORE = "in.cursor = in.marker";
re2c:define:YYLESSTHAN = "in.limit <= in.cursor";
re2c:define:YYFILL = "return lexWaitingForInput";
re2c:define:YYGETSTATE = "in.state";
re2c:define:YYSETSTATE = "in.state = @@{state}";
packet = [a-z]+[;];
* { return lexPacketBroken }
$ { return lexEnd }
packet { *recv = *recv + 1; goto loop }
*/
}

func test(packets []string) int {
fname := "pipe"
fw, _ := os.Create(fname);
fr, _ := os.Open(fname);
in := &Input{
file: fr,
data: make([]byte, SIZE+1),
cursor: SIZE,
marker: SIZE,
token: SIZE,
limit: SIZE,
state: -1,
}
// data is zero-initialized, no need to write sentinel
var status int
send := 0
recv := 0
loop:
for {
status = lex(in, &recv)
if status == lexEnd {
if send != recv {
status = lexCountMismatch
}
break loop
} else if status == lexWaitingForInput {
if send < len(packets) {
fw.WriteString(packets[send])
send += 1
}
status = fill(in)
if status != lexReady {
break loop
}
} else if status == lexPacketBroken {
break loop
} else {
panic("unexpected status")
}
}
fr.Close()
fw.Close()
os.Remove(fname)
return status
}

func TestLex(t *testing.T) {
var tests = []struct {
status int
packets []string
}{
{lexEnd, []string{}},
{lexEnd, []string{"zero;", "one;", "two;", "three;", "four;"}},
{lexPacketBroken, []string{"??;"}},
{lexPacketTooBig, []string{"looooooooooooong;"}},
}
for i, x := range tests {
t.Run(fmt.Sprintf("%d", i), func(t *testing.T) {
status := test(x.packets)
if status != x.status {
t.Errorf("got %d, want %d", status, x.status)
}
})
}
}
Reuse mode is enabled with the -r --reusable option. In this mode re2c allows one to reuse definitions, configurations and rules specified by a /*!rules:re2c*/ block in subsequent /*!use:re2c*/ blocks. As of re2c-1.2 it is possible to mix such blocks with normal /*!re2c*/ blocks; prior to that re2c expects a single rules-block followed by use-blocks (normal blocks are disallowed). Use-blocks can have additional definitions, configurations and rules: they are merged with those specified by the rules-block. A very common use case for the -r --reusable option is a lexer that supports multiple input encodings: lexer rules are defined once and reused multiple times with encoding-specific configurations, such as re2c:flags:utf-8.
Below is an example of a multi-encoding lexer: it reads a phrase with Unicode math symbols and accepts input either in UTF8 or in UTF32. Note that the --input-encoding utf8 option allows us to write UTF8-encoded symbols in the regular expressions; without this option re2c would parse them as a plain ASCII byte sequence (and we would have to use hexadecimal escape sequences).
//go:generate re2go $INPUT -o $OUTPUT -r --input-encoding utf8
package main

import "testing"

/*!rules:re2c
re2c:yyfill:enable = 0;
re2c:define:YYPEEK = "str[cursor]";
re2c:define:YYSKIP = "cursor += 1";
re2c:define:YYBACKUP = "marker = cursor";
re2c:define:YYRESTORE = "cursor = marker";
"∀x ∃y: p(x, y)" { return 0; }
* { return 1; }
*/

func lexUTF8(str []uint8) int {
var cursor, marker int
/*!use:re2c
re2c:flags:8 = 1;
re2c:define:YYCTYPE = uint8;
*/
}

func lexUTF32(str []uint32) int {
var cursor, marker int
/*!use:re2c
re2c:flags:u = 1;
re2c:define:YYCTYPE = uint32;
*/
}

func TestLexUTF8(t *testing.T) {
s_utf8 := []uint8{
0xe2, 0x88, 0x80, 0x78, 0x20, 0xe2, 0x88, 0x83, 0x79,
0x3a, 0x20, 0x70, 0x28, 0x78, 0x2c, 0x20, 0x79, 0x29};
if lexUTF8(s_utf8) != 0 {
t.Errorf("utf8 failed")
}
}

func TestLexUTF32(t *testing.T) {
s_utf32 := []uint32{
0x00002200, 0x00000078, 0x00000020, 0x00002203, 0x00000079,
0x0000003a, 0x00000020, 0x00000070, 0x00000028, 0x00000078,
0x0000002c, 0x00000020, 0x00000079, 0x00000029};
if lexUTF32(s_utf32) != 0 {
t.Errorf("utf32 failed")
}
}
re2c supports the following encodings: ASCII (default), EBCDIC (-e), UCS-2 (-w), UTF-16 (-x), UTF-32 (-u) and UTF-8 (-8). See also inplace configuration re2c:flags.
The following concepts should be clarified when talking about encodings. A code point is an abstract number that represents a single symbol. A code unit is the smallest unit of memory used in the encoded text (it corresponds to one character in the input stream). One or more code units may be needed to represent a single code point, depending on the encoding. In a fixed-length encoding, each code point is represented with an equal number of code units. In variable-length encodings, different code points can be represented with different numbers of code units.
In Unicode, values from the range 0xD800 to 0xDFFF (surrogates) are not valid Unicode code points. Any encoded sequence of code units that would map to Unicode code points in the range 0xD800-0xDFFF is ill-formed. The user can control how re2c treats such ill-formed sequences with the --encoding-policy <policy> switch.
For some encodings, there are code units that never occur in a valid encoded stream (e.g., the 0xFF byte in UTF-8). If the generated scanner must check for invalid input, the only correct way to do so is to use the default rule (*). Note that the full range rule ([^]) won't catch invalid code units when a variable-length encoding is used ([^] means "any valid code point", whereas the default rule (*) means "any possible code unit").
Conditions are enabled with -c --conditions. This option allows one to encode multiple interrelated lexers within the same re2c block.
Each lexer corresponds to a single condition. It starts with a label of the form yyc_name, where name is the condition name and the yyc prefix can be adjusted with the configuration re2c:condprefix. Different lexers are separated with a comment /* *********************************** */ which can be adjusted with the configuration re2c:cond:divider.
Furthermore, each condition has a unique identifier of the form yycname, where name is the condition name and the yyc prefix can be adjusted with the configuration re2c:condenumprefix. Identifiers have the type YYCONDTYPE and should be generated with the /*!types:re2c*/ directive or the -t --type-header option. Users shouldn't define these identifiers manually, as the order of conditions is not specified.
Before all conditions re2c generates entry code that checks the current condition identifier and transfers control flow to the start label of the active condition. After matching some rule of this condition, the lexer may either transfer control flow back to the entry code (after executing the associated action and optionally setting another condition with =>), or use the :=> shortcut and transition directly to the start label of another condition (skipping the action and the entry code). The configuration re2c:cond:goto allows one to change the default behavior.
Syntactically, each rule must be preceded with a list of comma-separated condition names or a wildcard * enclosed in angle brackets < and >. The wildcard means "any condition" and is semantically equivalent to listing all condition names. In these rules, regexp is a regular expression, * refers to the default rule, and action is a block of code.
Rules with an exclamation mark ! in front of the condition list have a special meaning: they have no regular expression, and the associated action is merged as entry code into the actions of normal rules. This might be a convenient place to perform a routine task that is common to all rules.
Another special form of rules with an empty condition list <> and no regular expression allows one to specify an "entry condition" that can be used to execute code before entering the lexer. It is semantically equivalent to a condition with number zero, name 0 and an empty regular expression.
//go:generate re2go -c $INPUT -o $OUTPUT -i
package main

import (
"errors"
"testing" ) var (
eSyntax = errors.New("syntax error")
eOverflow = errors.New("overflow error") ) /*!types:re2c*/ const u32Limit uint64 = 1<<32 func parse_u32(str string) (uint32, error) {
var cursor, marker int
result := uint64(0)
cond := yycinit
add_digit := func(base uint64, offset byte) {
result = result * base + uint64(str[cursor-1] - offset)
if result >= u32Limit {
result = u32Limit
}
}
/*!re2c
re2c:yyfill:enable = 0;
re2c:define:YYCTYPE = byte;
re2c:define:YYPEEK = "str[cursor]";
re2c:define:YYSKIP = "cursor += 1";
re2c:define:YYSHIFT = "cursor += @@{shift}";
re2c:define:YYBACKUP = "marker = cursor";
re2c:define:YYRESTORE = "cursor = marker";
re2c:define:YYGETCONDITION = "cond";
re2c:define:YYSETCONDITION = "cond = @@";
<*> * { return 0, eSyntax }
<init> '0b' / [01] :=> bin
<init> "0" :=> oct
<init> "" / [1-9] :=> dec
<init> '0x' / [0-9a-fA-F] :=> hex
<bin, oct, dec, hex> "\x00" {
if result < u32Limit {
return uint32(result), nil
} else {
return 0, eOverflow
}
}
<bin> [01] { add_digit(2, '0'); goto yyc_bin }
<oct> [0-7] { add_digit(8, '0'); goto yyc_oct }
<dec> [0-9] { add_digit(10, '0'); goto yyc_dec }
<hex> [0-9] { add_digit(16, '0'); goto yyc_hex }
<hex> [a-f] { add_digit(16, 'a'-10); goto yyc_hex }
<hex> [A-F] { add_digit(16, 'A'-10); goto yyc_hex }
*/
}

func TestLex(t *testing.T) {
var tests = []struct {
num uint32
str string
err error
}{
{1234567890, "1234567890\000", nil},
{13, "0b1101\000", nil},
{0x7fe, "0x007Fe\000", nil},
{0644, "0644\000", nil},
{0, "9999999999\000", eOverflow},
{0, "123??\000", eSyntax},
}
for _, x := range tests {
t.Run(x.str, func(t *testing.T) {
num, err := parse_u32(x.str)
if !(num == x.num && err == x.err) {
t.Errorf("got %d, want %d", num, x.num)
}
})
}
}
With the -S, --skeleton option, re2c ignores all non-re2c code and generates a self-contained C program that can be further compiled and executed. The program consists of lexer code and input data. For each constructed DFA (block or condition) re2c generates a standalone lexer and two files: an .input file with strings derived from the DFA and a .keys file with expected match results. The program runs each lexer on the corresponding .input file and compares results with the expectations. Skeleton programs are very useful for a number of reasons: they allow one to verify that the generated lexer behaves as expected on a systematic set of inputs, to test re2c itself, and to compare the behavior of lexers generated with different options.
The difficulty with generating input data is that for all but the most trivial cases the number of possible input strings is too large (even if the string length is limited). re2c solves this difficulty by generating sufficiently many strings to cover almost all DFA transitions. It uses the following algorithm. First, it constructs a skeleton of the DFA. For encodings with 1-byte code unit size (such as ASCII, UTF-8 and EBCDIC) the skeleton is an exact copy of the original DFA. For encodings with multibyte code units the skeleton is a copy of the DFA with certain transitions omitted: namely, re2c takes at most 256 code units for each disjoint continuous range that corresponds to a DFA transition. The chosen values are evenly distributed and include the range bounds. Instead of trying to cover all possible paths in the skeleton (which is infeasible), re2c generates sufficiently many paths to cover all skeleton transitions, and thus trigger the corresponding conditional jumps in the lexer. The algorithm implementation is limited to ~1Gb of transitions and consumes a constant amount of memory (re2c writes data to file as soon as it is generated).
With the -D, --emit-dot option, re2c does not generate code. Instead, it dumps the generated DFA in DOT format. One can convert this dump to an image of the DFA using Graphviz or another library. Note that this option shows the final DFA after it has gone through a number of optimizations and transformations. Earlier stages can be dumped with various debug options, such as --dump-nfa, --dump-dfa-raw etc. (see the full list of options).
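A typical workflow pipes the DOT dump through Graphviz. This is a sketch; lexer.re is a hypothetical input file name:

```shell
# Dump the final DFA in DOT format instead of generating code.
re2c -D lexer.re > lexer.dot

# Render the graph to an image with Graphviz.
dot -Tpng lexer.dot -o lexer.png
```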
You can find more information about re2c at the official website: http://re2c.org. Similar programs are flex(1), lex(1) and Quex (http://quex.sourceforge.net).
Re2c was originally written by Peter Bumbulis in 1993. Since then it has been developed and maintained by multiple volunteers; most notably, Brian Young, Marcus Boerger, Dan Nuffer and Ulya Trofimovich.