sqsh(1)                          sqsh-2.5                          sqsh(1)
sqsh - Interactive database shell (version 2.5)
sqsh [ options ] [ args ... ]
[ -a count ]
[ -A packet_size ]
[ -b ]
[ -B ]
[ -c [cmdend] ]
[ -C sql ]
[ -d severity ]
[ -D database ]
[ -e ]
[ -E editor ]
[ -f severity ]
[ -G tds_version ]
[ -h ]
[ -H hostname ]
[ -i filename ]
[ -I interfaces ]
[ -J charset ]
[ -k keywords ]
[ -K keytab ]
[ -l debug_flags ]
[ -L var=value ]
[ -m style ]
[ -n on|off ]
[ -N appname ]
[ -o filename ]
[ -p ]
[ -P [password] ]
[ -Q query_timeout ]
[ -r [sqshrc] ]
[ -R server_principal ]
[ -s colsep ]
[ -S server ]
[ -t [filter] ]
[ -T login_timeout ]
[ -U username ]
[ -v ]
[ -V [bcdimoqru] ]
[ -w width ]
[ -X ]
[ -y directory ]
[ -z language ]
[ -Z [secmech|default|none] ]
Sqsh (pronounced skwish) is short for SQshelL (pronounced s-q-shell), and is intended as a replacement for the venerable 'isql' program supplied by Sybase. It came about after years of frustration from trying to do real work with a program that was never meant to perform real work.
Sqsh is much more than a nice prompt; it is intended to provide much of the functionality of a good shell, such as variables, aliasing, redirection, pipes, back-grounding, job control, history, command substitution, and dynamic configuration. Also, as a by-product of the design, it is remarkably easy to extend and add functionality.
The following options may be used to adjust some of the behavior of sqsh; however, a large portion of the configuration options are available only through environment variables, which may be set at runtime or via a .sqshrc file.
Options may also be supplied in the SQSH environment variable. This variable is parsed prior to parsing the command line, so in most cases the command line will override the contents of the variable. Be aware that for options which are allowed to be supplied multiple times, such as -c, supplying them both in a variable and on the command line will be the same as supplying them multiple times on the command line.
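For example (the server name is hypothetical; the command-line -w132 overrides the -w80 supplied through the variable, while the -mpretty display style still applies):

~$ export SQSH="-w80 -mpretty"
~$ sqsh -SASE1502 -Usa -w132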
-G tds_version: TDS stands for Tabular Data Stream and is the communication protocol Sybase and Microsoft use for client-server communication.
-K keytab: See the Kerberos Support section below for details.
-P [password]: sqsh takes pains to hide the password from any user trying to view the command line used to launch sqsh, so using -P should not constitute a security hole. Alternatively, you can store your default password in the .sqshrc file, which should not be readable by anyone other than yourself (chmod 600 .sqshrc).
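A minimal illustration (the password value is, of course, hypothetical; the \set line lives in $HOME/.sqshrc, the chmod is run from the shell):

~$ grep password $HOME/.sqshrc
\set password='s3cr3t'
~$ chmod 600 $HOME/.sqshrc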
For example, with a query timeout of 30 seconds (-Q30):
sqsh -SASE1502 -Usa -P -Q30
ASE1502.sa.master.1> cd tix
ASE1502.sa.tix.1> select count(*) from E_TIX;
Open Client Message
Layer 1, Origin 2, Severity 2, Number 63
ct_results(): user api layer: internal Client Library error:
Read from the server has timed out.
ASE1502: Query or command timeout detected, command/batch cancelled
ASE1502: The client connection has detected this 1 time(s).
See also the $max_timeout variable, which controls the number of times a timeout may occur on the current connection before the session is aborted.
-R server_principal: See the Kerberos Support section below for details.
[FILTERS]
ssl=libsybfssl.so
ssl64=libsybfssl64.so
-V [bcdimoqru]: See the Kerberos Support section below for details.
-Z [secmech|default|none]: See the Kerberos Support section below for details.
Upon startup, sqsh initializes all internal environment variables, commands, and aliases to their default values. It then looks for the system-wide configuration file /etc/sqshrc, followed by the local configuration file $HOME/.sqshrc (this location may be overridden via the SQSHRC external environment variable). If such a file is found, it is executed just as a script supplied via the -i flag would be.
The .sqshrc file may contain anything that could normally be typed at the prompt. Note, however, that at the time this file is read sqsh has not yet established a connection to the database; most commands that perform database activity, such as \go, will attempt to establish a connection when executed (and may prompt you for a password if necessary). Alternatively, if database activity is required within this startup file, the \connect command (see COMMANDS, below) may be executed explicitly.
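A small, purely hypothetical .sqshrc might look like this (every line is an ordinary sqsh command):

\set username=sa
\set style=pretty
\set semicolon_hack=1
\alias h='\history'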
After the .sqshrc file has been executed, sqsh then parses any command line options (thus any variables set in your .sqshrc file may be overridden by command line options). Following that, if sqsh is run in interactive mode (i.e. without -i and if stdin is attached to a tty), it then looks for the file provided by the $history variable and loads the contents of that file into the history buffers. (see BUFFERS, below).
Immediately prior to establishing a connection to the database (either during startup, or by an explicit \connect or \reconnect command), the file $HOME/.sqsh_session is executed. The name of this file may be overridden using the $session variable.
When a line is first read by sqsh, the first word is separated from the line. This word then undergoes variable expansion (see Variable Substitution, below), followed by command substitution (see Command Substitution, below). The first word of the resulting string is then analyzed to see if it is either a valid sqsh command or an alias.
The sqsh command line follows many of the same rules as Bourne shell, allowing file redirection, pipelining, command substitution, and back-grounding via the same syntax.
Any line beginning with a # followed by a non-alphanumeric character (any character other than 0-9, a-z, A-Z, and _) causes the entire line to be ignored. Because of the possible collision with T-SQL session-specific temp-table names, the line will not be ignored if the first character following the # is alphanumeric.
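For example, in an interactive session the first line below is discarded as a comment, while the second is kept in the SQL buffer (note the prompt's line number advancing) because the # is followed by a letter, as in a temp-table name:

1> # this entire line is ignored
1> #tmp_results
2>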
Quoting is used to prevent the interpretation of special keywords or characters by sqsh, such as white-space, variable expansion, or command substitution. There are three types of quoting: escape, single quotes, and double quotes.
Enclosing characters in single quotes preserves the literal interpretation of each character contained within the quotes. A single quote may not appear within single quotes, even when preceded by an escape. For example:
1> \echo I can not expand '$username'
outputs
I can not expand $username
The characters \\ are used to escape the meaning (and thus prevent the interpretation) of the character immediately following them. The \ character itself may be escaped. For example:
1> \echo I can\\'t expand '$username'
outputs
I can't expand $username
The escape character may also be used to escape a new-line in order to perform a line continuation; in this case the new-line is discarded and the continued line is automatically appended to the previous line. For example:
1> \echo Hello \\
--> World!
Hello World!
Enclosing characters in double quotes preserves the literal meaning of all characters within them with the exception of $, ', and \\. A double quote may be contained within double quotes by escaping it.
1> \echo "\\"I can't deny it, I like $username\\", she said"
prints out
"I can't deny it, I like gray", she said
After a line of input has been read, sqsh attempts to expand any aliases on the line (see Aliasing, below); following that, it attempts to determine if the line begins with a command keyword. Once a line has been determined to contain a command name, three types of expansion are performed on it: variable substitution, command substitution, and tilde expansion. With tilde expansion, a ~ provided on the command line is substituted with the corresponding HOME directory name.
1> \echo ~sybase/err.log
may result in /home/sybase/err.log for example.
1> exec sp_helpdb
2> go > ~/db.log
may result in a file /export/home/dba/db.log for example, depending on the Unix login and the exact OS you are using. Following this expansion the command line is separated into words and the command is executed.
The character $ is used to indicate variable substitution or expansion within a word. These variables may be assigned values by the \set command like so:
1> \set name=value
name may be a letter or underscore followed by any combination of letters, digits, or underscores, and may not contain any special characters, such as (') and ("). The restriction that the first character may not be a digit exists because SQL allows money values to be represented as $nn.nn, where n is a digit.
value may contain anything; however, if it is to include white-space, then it must be quoted (see Words & Quoting, above). Note that in order to prevent the expansion of a variable, use either single quotes or two \'s, like so:
1> \echo \\$name
$name
Variables may be referenced in one of two ways:
$variable
All letters, digits, and underscores are treated as part of the variable name until another type of character is reached (either a special character or white-space).
${variable}
The braces are required only when variable is followed by a letter, digit, or underscore that is not to be interpreted as part of its name. Note that the same effect may be achieved using double quotes.
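For example, the braces keep trailing characters from being read as part of the variable name (the variable here is purely illustrative):

1> \set obj=sysobj
1> \echo ${obj}ects
sysobjects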
It should be noted that because the variables are expanded prior to breaking the command line into words, if the contents of the variable contain white spaces, they are treated as significant by the parser. In the following example:
1> \set x="1 2 3"
1> \echo $x
the \echo command receives three arguments, "1", "2", and "3", although it looks as if only one argument was passed to it. This behavior is consistent with most shells (such as csh, bourne shell, etc.).
Sqsh supports a second form of expansion called command substitution. This form of expansion replaces a portion of the command line with the output of an external UNIX command. It is achieved by placing the command to be executed in back-quotes (`). For example:
1> \set password=`/sybase/bin/getpwd $DSQUERY`
1> \echo $password
ilikepickles
In this example, the external program /sybase/bin/getpwd is executed with the current contents of the $DSQUERY environment variable, and the entire back-quoted expression is replaced with the output of getpwd (ilikepickles) prior to executing the \set command. By default, the output of the substituted command is first broken into words according to the contents of the $ifs variable before being reassembled into the command line. So, by overriding the contents of $ifs you may affect the behavior of the substitution process.
For example:
1> \set ifs=":"
1> \echo `echo hello:how:are:you`
hello how are you
This mechanism is frequently useful for parsing input files, such as /etc/passwd into fields.
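For example, splitting the root entry of /etc/passwd into fields (the output shown is illustrative and will vary by system):

1> \set ifs=":"
1> \echo `grep ^root: /etc/passwd`
root x 0 0 root /root /bin/bash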
As with standard Bourne shell (and most other shells, for that matter), a command's input and output may be redirected using a special notation interpreted by the shell. The following may appear anywhere on the command line, but only redirection that is specified prior to a pipe (|) actually has any effect on the behavior of internal sqsh commands (refer to Pipes, below).
1> select * from select /* syntax error */
2> \go >/tmp/output 2>&1
This statement first redirects the standard output of the \go command to the file /tmp/output, then redirects stderr to the same file. So, when the command fails, the error output will be found in the file /tmp/output.
However, by changing the order of redirection, you can completely change the meaning:
1> select * from select
2> \go 2>&1 >/tmp/output
Msg 156, Level 15, State 1
Server 'SQSH_TEST', Line 1
Incorrect syntax near the keyword 'select'.
In this case, error output will be sent to stdout, while what would have gone to stdout is redirected to /tmp/output (in this case /tmp/output will be empty).
Please read the section on Background Jobs, below, for detailed info on the interaction between file redirection and background jobs.
A pipeline is a sequence of one or more commands separated by a '|', each command using the stdout of the preceding program for its own stdin. However the first command in the pipeline must be a sqsh command, and all other commands must be external (or UNIX) programs. Any sqsh command may be run through a pipeline, although for many of them (such as the \set command) it doesn't really make any sense to do this. The following is an example of a pipeline:
1> select * from syslogins
2> \go | more
This command causes the result set generated by the \go command to be sent to the more(1) program, which then sends it to your screen, pausing at each screen full of data (this is the primary reason that I wrote sqsh).
There are several peculiarities in the way in which sqsh deals with pipelines as opposed to the way in which standard Bourne shell treats them.
Everything following the first occurrence of a pipe (|) character is broken into white-space delimited words, including such special shell commands as '2>&1' and other occurrences of pipes. If there are any variables contained in these words they are expanded following the same quoting rules as described in Words & Quoting, above, with the one exception that all quotes are left in place. These words are then reassembled into a single string and shipped off to /bin/sh for processing.
In short, sqsh makes no attempt to interpret what follows the first pipe, instead it is shipped off to a "real" shell to do the work. The rationale behind this is that I was lazy and didn't feel like writing all of the same bizarre variable handling, &&'ing, ||'ing, grouping, and variable expansion rules that Bourne shell supports, and instead I let Bourne do the dirty work.
The advantage of this method is that you can do some very complex stuff after the pipeline, such as:
1> select * from syscolumns
2> \go | (cd /tmp; compress -c > syscolumns.Z)
Not that I can think of any real reason to do this...but you can if you want to.
Backgrounding provides a mechanism whereby you may run any sqsh command as a background process and continue working while it runs. Sqsh offers two types of backgrounding:
To specify that a job be run in the background, simply append a & to the end of the command line, as:
1> sp_long_arduous_proc 1, 30
2> \go &
Job #1 running [xxxx]
1>
When sqsh encounters the & at the end of the command line it spawns a child process (with a Unix process id of xxxx); the child process then runs the \go command, which establishes a new connection to the database (using the current values of the $DSQUERY, $username, and $password variables) and executes the query shown. While the job is executing, the commands \jobs, \wait, and \kill may be used to monitor or alter it (see section COMMANDS, below). When any job completes, sqsh displays a notification, such as:
1> select count(*) from <return>
Job #1 complete (output pending)
2>
When a job completes, if it had no output, it is immediately considered terminated and will not show up in the current list of running jobs. However, if the completed job has pending output, it will continue to be displayed as a running job (with the \jobs command) until a \show is used to display its output. If you try to exit the parent sqsh session while background jobs are active or have pending output, the message "You have running jobs or pending job output." is shown; you have to process all the jobs before being able to exit sqsh.
There is a known bug with job backgrounding when used in conjunction with pipes, please refer to the BUGS section at the end of the manual.
In normal isql only two buffers are maintained: the buffer into which you are currently typing, and a buffer that contains the last batch executed (this is kept around for when you run 'vi' or 'edit').
Sqsh maintains several distinct sets of buffers:
Work Buffer: This buffer corresponds directly to the isql work buffer. It is the buffer into which you enter the current batch prior to sending it to the database.
History Buffer: This is actually a chain of 0 or more buffers (configurable by the $histsize variable) of the last $histsize batches that have been run. This buffer is only maintained when sqsh is run in interactive mode; that is, batches executed using the -i flag, or executed via redirection from the UNIX prompt will not be maintained in history (after all, they are already in a file somewhere).
If the variable $histsave is True (see section SPECIAL VARIABLES) and sqsh is in interactive mode, then the current history buffer is written to $HOME/.sqsh_history when you exit. This file is then read back into sqsh the next time it is started in interactive mode.
Named Buffers: At any time during a session the Work Buffer, or any of the History Buffers may be copied into a named buffer using the \buf-copy command (see section COMMANDS, below). These buffers are lost when you exit (however you may use the \buf-save command to save named buffers to a file).
Buffer Short-Hand
Many commands allow all of these buffers to be referenced in a short-hand fashion, very similar to the way that csh(1) references its commands history. Any of these shorthands may be used for any buffer parameter described in the COMMANDS section:
Variables
Variables may also be contained within work buffers. Under these circumstances the variables remain unexpanded until the buffer is sent to the database (via the \go command), during which time they are expanded and replaced within the buffer. This behavior may be altered via the $expand variable. (see Special Variables, below).
The following is an example of using variables within a buffer:
1> \set table_name=syscolumns
1> select count(*) from $table_name
2> \go
This is the equivalent of performing the query:
1> select count(*) from syscolumns
2> \go
directly. Typically this feature is useful for reusing large complex where clauses, or long column names.
Quoting rules apply the same in SQL buffers as they do in command lines. That is, any variables contained within double quotes (") are expanded and variables contained within single quotes (') are left untouched. Thus:
1> select "$username", '$username'
2> \go
yields the results
---- ---------
gray $username
Command Substitution
As with the command line, the output of UNIX commands may also be substituted within a SQL buffer upon execution (once again, only if the $expand variable is set to 1, or true). In this circumstance the command contained within back quotes (`) is replaced with its output prior to forwarding the buffer to SQL server. For example:
1> select count(*) from `echo syscolumns`
2> \go
This causes the string `echo syscolumns` to be replaced by the word syscolumns prior to executing the command. It should be noted that the substituted command is only executed at the time of the \go command, not when the line of SQL is input.
New with version 2.0 of sqsh, is the ability to perform basic flow-of-control and functions using the \if, \while, \do, and \func commands.
Blocks & SQL Buffers
All sqsh flow-of-control commands are block-based. That is, if the test expression of the command is met, then a block of sqsh-script will be executed. For example, the definition of the \if command is:
\if expression
block
\fi
This block may be any number of lines of sqsh commands, SQL, or flow-of-control statements to be executed if the expression evaluates to a success condition (0).
Each block has its own SQL buffer for the duration that the block is executed. That is, the following statements:
1> /*
2> ** IMPROPER USAGE OF IF BLOCK
3> */
4> select count(*) from
5> \if [ $x -gt 10 ]
6> sysobjects
7> \else
8> sysindexes
9> \fi
5> go
will yield:
Msg 102, Level 15, State 1
Server 'bps_pro', Line 1
Incorrect syntax near 'from'
because the strings 'sysobjects' and 'sysindexes' were inserted into their own SQL buffers. These buffers are discarded as soon as the end of the block is reached, and since no \go command was contained within the block, no additional errors are generated.
Thus, the correct way to write the above expression would be:
1> /*
2> ** PROPER USAGE OF IF BLOCK
3> */
4> \if [ $x -gt 10 ]
5> select count(*) from sysobjects
6> go
7> \else
8> select count(*) from sysindexes
9> go
10> \fi
or, even:
1> /*
2> ** PROPER USAGE OF IF BLOCK
3> */
4> \if [ $x -gt 10 ]
5> \set table_name=sysobjects
6> \else
7> \set table_name=sysindexes
8> \fi
4> select * from $table_name
5> go
Also, note that the line number displayed in the sqsh prompt resets to the current position in the outer SQL buffer after reaching the \fi terminator.
Expressions
All flow-of-control statements in sqsh take an expression to determine which block of code to execute. Just like UNIX's Bourne Shell, this expression is simply an operating system program that is executed by sqsh. If the command returns a success status (calls exit(0)), then it is considered successful.
For example, the following statement:
\while test $x -lt 10
block
\done
will execute the contents of block while the current value of $x is less than 10. Note that 'test' is a standard UNIX program to perform basic string or numeric comparisons (among other things). Also, unlike many shells, sqsh has no built-in version of 'test'.
Sqsh does, however, support the standard short form of 'test':
\while [ $x -lt 10 ]
block
\done
With this expression the open bracket ('[') is replaced by the sqsh parser with 'test', and the close bracket (']') is discarded.
Unsupported Expressions
Currently sqsh does not support the standard shell predicate operators '&&' and '||'. These can be performed like so:
\if sh -c "cmd1 && cmd2"
block
\fi
\if statement
The \if command performs conditional execution of a sqsh block based upon the outcome of a supplied expression:
\if expr1
block1
\elif expr2
block2
\else
block3
\fi
In this example, if expression expr1 evaluates to true, then the block block1 is evaluated. Otherwise, if the expression expr2 evaluates to true, then block block2 is evaluated. Finally, if all other tests fail block3 is evaluated.
Note that, unlike Bourne shell, every \if command must be accompanied by a trailing \fi statement. Also, the sqsh parser is not terribly intelligent: the \else and \fi statements must be the only contents of the line on which they appear, and they may not be aliased to another name.
\while statement
The \while command executes a block of sqsh code for as long as a supplied expression remains true.
\while expr
block
\done
In this example, the block block is evaluated repeatedly for as long as the expression expr evaluates to true.
The \break statement may be used to break out of the inner-most \while or \for loop (more on \for below).
\for statement
The \for command executes a block of sqsh code for each word supplied:
\for var in word ...
block
\done
For each word supplied, the value of the variable $var is set to the word and the block of code is executed. Execution ends when there are no more words in the list.
As with \while the \break statement may be used to break out of the inner-most execution loop.
\do command
The \do command is kind of a cross between a statement and a command.
It is a form of \go (see below for details on the \go command) in which a block of sqsh code may be executed for each row of data returned by the query. When the block is executed, special sqsh variables #[0-9]+ (a hash followed by a number) may be used to reference the values in the returned row. For example, the following command:
select dbid, name from master..sysdatabases
\do
\echo "Checkpointing database #2, dbid #1"
use #2
go
checkpoint
go
\done
would cause a CHECKPOINT command to be issued in each database on the server.
Command line options
The \do command establishes a new connection to be used by the block of code when executed. By default, this connection is established to the current server (the current setting of $DSQUERY), using the current username ($username) and the current password ($password). This behavior may, however, be overridden using command line options:
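For example, a sketch that assumes \do accepts the same -S, -U, and -P connection flags as \connect (the flag names, server, and login shown here are assumptions used purely for illustration; consult the option list for your version):

select name from master..sysdatabases
\do -S REPORT_SRV -U reporter -P report_pw
\echo "checking database #1 on the REPORT_SRV connection"
use #1
go
\done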
Column variables
As mentioned above, the values of the columns in the current result set may be accessed using the special #[0-9]+ variables. Thus, the variable #1 contains the value of column number one of the current result set, and #122 would contain the value of the 122nd column (column numbers begin at 1).
In the case of nested \do loops, values at previous nesting levels may be referred to by simply appending an additional '#' for each previous nesting level, like so:
select id, name from sysobjects
\do
select name, indid from sysindexes where id = #1
\do
\echo "Table ##2 (objid ##1) has index #1"
\done
\done
Obviously, this isn't the way you would do this query in real life, but you get the idea.
When expanding columns with NULL values, the column variable will expand to an empty string (''). Also, references to non-existent columns, such as #0, will result in an empty string ('').
As with regular sqsh variables (those referenced with a '$'), column variables will not be expanded when contained within single quotes.
Aborting
If the \break or \return commands are issued during the processing of a \do loop, the current query will be canceled, the connection used by the loop will be closed (unless the -n flag was supplied) and the \do loop will abort.
\func command
The \func command is used to define a reusable block of sqsh code as a function. Functions are defined like so:
\func stats
\if [ $# -ne 1 ]
\echo "use: stats [on | off]"
\return 1
\fi
set statistics io ${1}
set statistics time ${1}
go
\done
In this example a new function is established called stats that expects a single argument, either "on" or "off". Using this argument, stats will enable or disable time-based and I/O-based statistics.
Once established, the function may be called like so:
\call stats on
This causes all instances of ${1} to be replaced with the first command line argument supplied to stats.
Command line options
Currently only one command line argument is available to the \func command.
Function variables
As shown in the example above, several special variables are available for use within the body of the function. These are:
Return value
A value may be returned from a function via the \return command.
Like so:
\return N
Where N is a numeric return value. This return value is available to the caller of the function via the $? variable. By convention, a return value of 0 is used to indicate success.
If \return is not explicitly called, the default return value is the current value of the $? variable (which is set to 0 upon entry of the function). Thus, if any SQL statements are invoked within the function, the default return value of $? will be the last error code returned during the processing of the SQL statement.
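For example, a sketch using the stats function defined above; calling it without an argument trips the usage check and returns 1, which the caller can test through $?:

\call stats
\if [ $? -ne 0 ]
\echo "stats reported a usage error"
\fi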
Starting with version 2.1.6, sqsh provides the same command line options as isql to handle Kerberos network authentication.
In version 2.1.5 experimental Kerberos support was added using the -K and -R options. -K was merely a switch to set Kerberos on. In sqsh 2.1.6 a more advanced implementation of network authentication is introduced, although still experimental.
By using the parameters -K, -R, -V, -Z you can make use of your defined network security settings (libtcl.cfg). The named options are identical to the ones defined for isql.
For example, connecting to a server using Kerberos (which happens to be the default, i.e. the first entry in the libtcl.cfg [SECURITY] section in this example):
~$ sqsh -SSYB1502 -Uuser1 -RFC6A1502 -Z
\connect: Network authenticated session expires at:
16 Feb 2010 15:28:39 (11764 secs)
SYB1502.user1.master.1> select @@servername,@@authmech,
show_sec_services();
---------------- ----------- ----------------------------------
FC6A1502 kerberos unifiedlogin delegation mutualauth
integrity confidentiality
detectreplay detectseq
Note that the real name of the server (@@servername) differs from the server name in the interfaces file, so we have to specify the principal name through the -R parameter. When you do not specify the -V parameter together with -Z, all available security options will be enabled. When -V is specified without any security service options, only option u for Network Authentication will be implicitly set and the default security mechanism will be used if -Z is not specified.
[user1@linux-fc6a ~]$ sqsh -SFC6A1502 -Uuser1 -V
Open Client Message
Layer 7, Origin 9, Severity 5, Number 1
ct_connect(): security service layer: internal security control
layer error:
Security service provider internal error -1765328352
occurred.
[user1@linux-fc6a ~]$ kinit
Password for user1@LOCALDOMAIN:
[user1@linux-fc6a ~]$ sqsh -SFC6A1502 -Uuser1 -V
\connect: Network authenticated session expires at:
16 Feb 2010 15:28:39 (10964 secs)
FC6A1502.user1.master.1>
When the connection succeeds, sqsh will store the real name of the security mechanism in the variable $secmech. For example, "\echo $secmech" may show csfkrb5. The parameter -V takes a list of characters from the possible values bcdimoqru. The option u enables Network Authentication; it is the default and will always be set when using -V or -Z, whether specified or not.
b - chanbinding : Channel binding
c - confidentiality : Data confidentiality service
d - delegation : Allow delegated credentials
i - integrity : Data integrity service
m - mutualauth : Mutual authentication for connection
establishment
o - dataorigin : Data origin stamping service
q - detectseq : Out-of-sequence detection
r - detectreplay : Data replay detection
u - unifiedlogin : Network Authentication
Please check master.dbo.syssecmechs for available services. Non-existent or unsupported services supplied with -V are silently ignored. If you specify -V and/or -Z, sqsh assumes network authentication is being attempted and no password is required.
If you have a network authenticated connection and want to \reconnect using normal ASE authentication with username and password, you have to reset the network authentication variables by specifying -Znone
For example:
[user1@linux-fc6a ~]$ sqsh -SFC6A1502 -Uuser1 -V
\connect: Network authenticated session expires at:
16 Feb 2010 15:28:39 (10764 secs)
FC6A1502.user1.master.1> \echo $secmech csfkrb5
FC6A1502.user1.master.1> \reconnect -SASE1502 -Usa -Psybase
Open Client Message
Layer 7, Origin 9, Severity 5, Number 8
ct_connect(): security service layer: internal security control
layer error:
Consistency checks performed on the credential failed
(minor status 0).
FC6A1502.user1.master.1> \reconnect -SASE1502 -Usa -Psybase -Znone
ASE1502.sa.master.1>
The first \reconnect fails because sqsh still wants to try network authentication; however, no user principal for 'sa' exists and no ticket is set, and thus the connection fails. The second \reconnect succeeds because the -Znone option resets the appropriate variables. If the Kerberos ticket is renewed with kinit or any other client tool, the sqsh session must perform a \reconnect to refresh the credentials and to prevent a premature session abort. With the \snace command you can request the session expiration interval. Depending on the security services that are set, the database connection may be closed without warning as soon as the ticket expires.
See chapter 16, "External Authentication", of the Sybase System Administration Guide volume 1 for more information on Kerberos network authentication.
The read-eval-print loop is the heart of the sqsh system and is responsible for prompting the user for input and determining what should be done with it. Typically this loop is for internal use only; however, it is exposed to the user because there are some creative things that can be done with it.
\loop completes when all input has been depleted (end-of-file is encountered) or when a command, such as \exit, requests that \loop exit.
Database Access
Given the size and complexity of sqsh (just look at the length of this man page), it is amazing how few database manipulation commands there actually are. The following are commands that affect or use the current database connection:
Using this flag is identical to setting the $database variable prior to establishing the connection.
See also the discussion on Kerberos support.
All arguments that are accepted by \connect are also accepted by \reconnect (in fact \reconnect uses \connect to establish the new connection).
For example: \run -p -i ~/tmp/runtst.sqsh 10 -m pretty
If the Work Buffer is empty and the $repeat_batch variable is set to "On", \go will attempt to re-run the last command executed (this will only work in interactive mode if history support is enabled).
\go accepts the following arguments:
The nitty-gritty details of \bcp go like this: First the current SQL batch is expanded (unless the $expand variable is set to 0) and shipped off to the database for processing. If all goes well, a new connection is established to the destination database (as specified via $DSQUERY or the -S flag) to transfer the result set using bcp. Then, the output of the source database connection is bound to the new bcp connection and data transfer is performed. \bcp can handle multiple result sets without any problem (including result sets returned from stored procedures, etc.) provided that all of the result sets are valid for the destination table.
The equivalent of a "bcp out" may be performed using the bcp display style setting and file redirection (see the $style variable).
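For example, a minimal "bcp out" sketch using the bcp display style and file redirection (the output path is hypothetical):

1> \set style=bcp
1> \set bcp_colsep='|'
1> select * from proddb..materials
2> \go > /tmp/materials.bcp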
1> select * from proddb..materials
2> \bcp -SDTA -i "truncate table testdb..materials" -N -X testdb..materials
\rpc invokes the remote procedure rpc_name with one or more parameters that may be named (using @var) or anonymous (by not supplying a name). Unfortunately, because Sybase's implementation of RPCs does not directly support most implicit data type conversions (mainly between VARCHAR, the string you supply on the command line, and most of the other data types that the remote procedure may be expecting), the syntax for the \rpc command is somewhat complex. In short, here is how things work:
As the \rpc command line is being parsed, sqsh attempts to guess the data type of each parameter value based on its format (for example, if it contains only digits, it is assumed to be an integer), and then performs an explicit data type conversion prior to invoking the remote procedure. If sqsh guesses wrong, several flags are available to force it to perform the correct data type conversion (see parm_opt).
Display Options
The following options may be supplied anywhere on the command line and are used to affect the manner in which the result set(s) returning from the remote procedure call are displayed:
Parameter Options
The following options may be supplied immediately prior to specifying a parameter value and are used to affect the way in which sqsh interprets the contents of the value prior to calling the remote procedure. Although sqsh will allow these parameters to be combined in any way, it only really makes sense to combine the -x flag with any other flag.
The following commands may be used to create, destroy, or manipulate the various buffers described in the BUFFERS section, above.
By default, \buf-edit uses the environment variable $EDITOR first, followed by $VISUAL, to determine which editor to use, defaulting to 'vi' if neither variable is set.
It is important to note that as of release 1.2, \buf-edit is no longer able to use the name of an alias to it as the name of the editor to launch. This is primarily due to the change in the behavior of alias (see section Aliasing, below, for details).
The commands edit, vi, and emacs are automatically established upon startup of sqsh for backward compatibility with isql.
\buf-append !. buffer
The following command(s) are used to manipulate the contents of internal variables and environment variables.
The following commands are used to view the status of, or manipulate, background jobs that are currently running; these correspond roughly to the commands supplied by shells such as csh(1).
As of release 1.2, sqsh supports full csh-style command aliasing. With this feature, sqsh checks the first word of each line, prior to any form of expansion, to see if it matches the name of an existing alias. If it does, the command is reprocessed with the alias definition replacing its name. Unlike csh, however, only one form of history substitution is available within an alias: the '!*' entry, indicating the current line being expanded. If no history expansion is called for, the arguments on the command line remain unchanged.
Like csh, aliases are not recursively expanded, so it is perfectly legal to create an alias that expands to a command by the same name.
The following command is used to create an alias:
After defining the new alias, whenever sqsh encounters a line beginning with alias_name, the remainder of the line is replaced with alias_body before any further processing is performed.
If the string '!*' exists anywhere within alias_body, the arguments supplied to the alias are inserted at that point; otherwise the arguments are appended to the end of the alias definition. For example:
1> \alias hi='\echo !* said hello'
1> hi Scott
Scott said hello
whereas if the alias does not include the !* keyword, then it behaves like so:
1> \alias hi='\echo said hello'
1> hi Scott
said hello Scott
It is perfectly legal to include a !* more than once within a given alias_body. Currently there is no way to escape the string !*, if you really need this feature send me mail.
The leftover commands.
The following aliases are established upon startup of sqsh, and are provided primarily for backward compatibility with isql. These may be removed at any time using the \unalias command (either at the prompt, or within your .sqshrc file).
In-Line \go
If the variable $semicolon_hack is set to 1 (on), then sqsh supports what is called an in-line \go feature. This allows the current command batch to be terminated and sent to the database in a single step by appending a ';' onto the end of the current work buffer. This allows
1> sp_who;
to behave in the same manner as if you had typed:
1> sp_who
2> \go
Likewise, anything following the semicolon is passed to the \go command just as if it was run as a normal command:
1> sp_who ; 2>/dev/null | more
Unlike most other isql replacements, sqsh attempts to be smart about semicolons. If a semicolon is contained within a set of single or double quotes it will not be interpreted, even when the quotes span multiple lines. For example:
1> select "This is a multiple line
2> quote; it is smart!" ;
In the above example, only the second semicolon (the one at the end of the line) will be interpreted. The variable $semicolon_cmd contains the command that is substituted for the semicolon; it is \go by default. But you can change that to \bcp, for example, and execute:
1> select * from pubs2..titles; -S... tempdb..titles
In sqsh-2.2.0 you can also set the variable $semicolon_hack2 to allow multiple commands on one line to be fired by semicolons. If this option is set you cannot use the construct above to pass additional parameters to the \go command, but of course you can change $semicolon_cmd to your needs, as that command will be used to execute the SQL batches.
echo "exec sp_who;exec sp_helpdb;\echo Done;" | sqsh -S... -U... -P...
In this example the semicolon acts as a \go command to a SQL buffer and as a command separator for sqsh commands that will be executed in sequence. Note that $semicolon_hack2 takes precedence over $semicolon_hack.
Several aspects of sqsh are configurable via command line options; however, these are by no means complete. Many aspects of sqsh's behavior may only be modified by setting special variables (in fact, the command line options really only set these variables for you).
Next to each of the variables that follow is the type of data with which it may be set. Any attempt to set a variable to a type of data that it does not accept will fail.
Variables
The following variables have special meanings within sqsh and the setting of these variables alter the behavior of the shell.
If $autouse is set, and the $database variable has not been set, then this variable causes \connect to perform a "use $autouse" once a connection has been established.
This variable may also be set using the -D command line option.
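For example, placing the following in your .sqshrc (the database name is hypothetical) causes every new connection to land in pubs2, unless $database has been set:

\set autouse=pubs2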
A batch is considered failed whenever an error of severity $thresh_fail or higher is encountered. When $batch_failcount reaches $thresh_exit, sqsh exits with an exit value of the total number of batches that have failed. Setting $batch_failcount to the string "" will cause it to reset to zero; any other value may have unpredictable results.
See EXIT STATUS for details.
Note that this feature relies upon the operating system's locale information and the setting of $localeconv for determining such things as the names of months and days, rather than going through the CT-Lib locale information. This means that the date format could potentially mismatch the locale requested using the -z flag. For example, if sqsh is run on an operating system configured for US English, but French is requested as the language of choice using -z, the use of $datetime will cause all date information to be displayed in US English rather than French.
Ordinary characters defined in the variable are left in place without any conversion. Characters introduced by a '%' character are replaced during display of a column value according to the definitions in the strftime manual page.
1> \set datetime='%e %b %Y %H:%M[:%S.%q]%p'
1> select convert(bigdatetime,getdate())
2> select convert(smalldatetime,getdate())
3> go
-----------------------------
22 Jul 2013 13:26:52.938000PM
-------------------
22 Jul 2013 13:27PM
LINUX1502.user1.master.1> \history
(1) sp_who
(2) grant role mon_role to sa_role
(3) select * from monProcessActivity
(4) select @@authmech,show_sec_services()
(5) select @@servername,@@authmech,show_sec_services()
LINUX1502.user1.master.1> sp_who
LINUX1502.user1.master.2> go
... output omitted
LINUX1502.user1.master.1> \history
(1) grant role mon_role to sa_role
(2) select * from monProcessActivity
(3) select @@authmech,show_sec_services()
(4) select @@servername,@@authmech,show_sec_services()
(5) sp_who
sp_who is the last executed command, and its buffer - originally the oldest in the list - has now moved to the most recent position. When an already existing buffer is reused, the value of the $histnum variable is not changed.
CTRL-D: Use "exit" or "quit" to leave the sqsh shell.
This is equivalent to using "set -o ignoreeof" in the bash shell.
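A minimal sketch, assuming this behavior is controlled by a boolean variable named $ignoreeof (the variable name is an assumption; check the variable list for your version). Placing the line in .sqshrc keeps an accidental CTRL-D from ending the session:

1> \set ignoreeof=On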
See also the discussion on Kerberos Support.
SYBASE.sa.master.1> \lcd $SYBASE/$SYBASE_ASE/scripts
\lcd: local directory changed to: /opt/sybase/ASE-15_0/scripts
SYBASE.sa.master.1> \run -n -i ./instm<TAB>
results in the completed filename "./instmsgs.ebf".
SYBPROD.sa.tempdb.1> select * from master..sysdatabases d,
SYBPROD.sa.tempdb.2> master.dbo.sysusages u
SYBPROD.sa.tempdb.3> where d.<TAB><TAB>
d.audflags d.def_remote_loc d.durability d.status2
d.audflags2 d.def_remote_type d.logptr d.status3
d.crdate d.deftabaud d.name d.status4
d.dbid d.defvwaud d.spare d.suid
d.defpraud d.dumptrdate d.status d.version
SYBPROD.sa.tempdb.3> where d.
This works both for the T-SQL join syntax and the ANSI inner, outer, left and right join syntax. Note that sqsh is now able to perform cross database auto-completion as well.
By default, the following query is used to build the keyword completion list:
select name from sysobjects order by name
But you can supply a different query that suits your needs even better. E.g.
\set keyword_query="\\
select name from sysobjects \\
where type in ('U','V','P','S') \\
union \\
select name from sybsystemprocs..sysobjects \\
where type='P' \\
order by name"
This feature is controlled by the variables $keyword_completion and $keyword_dynamic and is only available if GNU Readline support has been compiled into sqsh.
~$ export LANG=nl_NL.utf8
~$ locale -ck LC_TIME
LC_TIME
abday="zo;ma;di;wo;do;vr;za"
day="zondag;maandag;dinsdag;woensdag;donderdag;vrijdag;zaterdag"
abmon="jan;feb;mrt;apr;mei;jun;jul;aug;sep;okt;nov;dec"
mon="januari;februari;maart;april;mei;juni;juli;augustus;september;oktober;
november;december"
am_pm=";"
...
~$ sqsh -Llocaleconv=On -Ldatetime="%A %e %B %Y" -C"select getdate()"
---------------------------
maandag 22 juli 2013
(1 row affected)
~$ sqsh -Llocaleconv=Off -Ldatetime="%A %e %B %Y" -C"select getdate()"
---------------------------
Monday 22 July 2013
(1 row affected)
Also, depending on the locale, the decimal specifier in money, numeric, decimal, float and real datatypes may be displayed as a ",". Note that sqsh does not take the thousands_sep into account when displaying numerical data values.
Query or command timeout detected, session aborted
The client connection has detected this x time(s)
Aborting on max_timeout limit
SYBASE.sa.tempdb.1> \set p2fname=/tmp/sqsh_p2f.out
SYBASE.sa.tempdb.1> \set p2faxm=10
SYBASE.sa.tempdb.1> create clustered index keys_cx on keys(id)
SYBASE.sa.tempdb.2> with ignore_dup_row
SYBASE.sa.tempdb.2> go
Warning: deleted duplicate row. Primary key is '0'
Warning: deleted duplicate row. Primary key is '1'
Warning: deleted duplicate row. Primary key is '2'
Warning: deleted duplicate row. Primary key is '3'
Warning: deleted duplicate row. Primary key is '4'
Warning: deleted duplicate row. Primary key is '5'
Warning: deleted duplicate row. Primary key is '6'
Warning: deleted duplicate row. Primary key is '7'
Warning: deleted duplicate row. Primary key is '8'
Warning: deleted duplicate row. Primary key is '9'
Warning: Number of printed server messages exceeds p2faxm=10 limit for current batch.
Remaining server messages will be printed to file: /tmp/sqsh_p2f.out
See also the discussion on Kerberos Support.
\set readline_histignore='go:lo:mo:exit:quit:vi:!!:GO'
or
\set readline_histignore='"RE:^[glm]o$|^cd |^exit$|^quit$|^vi$"'
The regular expression in the second example starts with RE:, and it is recommended to enclose the string in both single and double quotes to prevent expansion and display problems due to the pipe characters. The regular expression is evaluated case-insensitively.
See also the paragraph on Kerberos Support.
The vert (or vertical) style rotates the output, so that every line is represented by a column name followed by a column value. This is nice for looking at particularly wide output.
The bcp style displays results in a format amenable to bcp'ing the result set back into another table. That is, every column value is separated by $bcp_colsep with the final column separated by $bcp_rowsep followed by a newline (\n). If $bcp_colsep or $bcp_rowsep are not defined then '|' is used as the default separator. Note that this output does not work well with COMPUTE columns, and uses the default conversion methods for all data types (that is, datetime columns may truncate the millisecond).
The csv display style outputs all result sets in comma-separated form. This mode is ideal for importing result sets into spreadsheet programs, for instance.
The html display style outputs all result sets in the form of an HTML <TABLE> construct. This mode is ideal for the use of sqsh as a CGI application.
The meta display style outputs only the meta-data information associated with the result and discards the actual row results. This mode is useful for debugging the result sets generated from a full passthru Open Server gateway, or for those interested in what is really coming back from the server.
The pretty display style generates a fluffy table-like output using regular ASCII characters for borders. This mode does not perform any explicit column wrapping, like the horiz display mode. However, the $colwidth variable can be used to control the maximum width of a given column on the screen. If the column exceeds $colwidth characters wide, it is wrapped in a relatively visually appealing manner. Note that $colwidth may be exceeded if there is enough screen width to hold the columns without wrapping.
The none display style suppresses all results from being displayed (however it does actually retrieve result information from the SQL Server). This is particularly useful when used with the -p flag (or the $statistics variable) for gathering accurate performance statistics.
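For example, the style may be changed for the session with the $style variable, or overridden for a single batch; the per-batch -m flag to \go shown here is an assumption modeled on the -m command line option:

1> \set style=vert
1> select @@servername, @@version
2> \go
1> select name, dbid from master..sysdatabases
2> \go -m csv > /tmp/databases.csv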
sqsh: ERROR: Unable to use database '...' in batch mode
This is to prevent a SQL script from being inadvertently executed in the wrong database, usually the default database of the login (master, for example).
As with most shells, sqsh allows a file containing SQL and script commands to be executed directly via the magical UNIX #! convention. On most UNIX platforms, when the operating system encounters the bytes #! as the first two bytes of an executable file it will automatically pipe the file through the interpreter specified immediately after the #!. For example, to create an executable sqsh script to run sp_who, you simply need to create a file like so:
#!/usr/bin/sqsh -i
sp_who
go
Thus, if your sp_who script is executed directly, it will automatically launch "/usr/bin/sqsh -i sp_who" for you.
And, to make things even more flexible, sqsh supports positional parameters, similar to most shells, of the form ${n} which will expand to the nth argument to your sqsh script. For example:
#!/usr/bin/sqsh -i
sp_who ${1}
go
will cause the sp_who stored procedure to be executed with an argument of the first command line parameter supplied to the sp_who shell script.
Note that positional parameters must be contained between braces to avoid conflicts with the T-SQL money data type (without the braces, the variable will not be expanded).
One of the major complaints about isql is that it provides no facility for detecting that an error condition occurred while it was processing. sqsh provides a rather complex, but flexible, mechanism for returning meaningful information concerning its reason for exiting in the form of an exit status (see exit(3)).
When sqsh begins execution two handlers are associated with the current connection to the database, one is a message handler which is responsible for displaying the text of any SQL Server messages or errors, and the other is an error handler, which is responsible for determining what to do with an error condition (bear with me, these are only loose descriptions). And, associated with each message and error condition is a severity level, between 0 and 22 (informational message to fatal condition).
Associated with these two message handlers are several variables that are used to either control their behavior, or are used as indicators by the message handler:
Note that, unless $exit_failcount is set to 1, sqsh will exit with 0 if the total number of failures does not reach $thresh_exit.
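For example, a driving shell script can simply test sqsh's exit status after a run (the script name is hypothetical; a zero here means the failure thresholds described above were never reached):

~$ sqsh -SASE1502 -Usa -i nightly.sql
~$ echo $?
0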
To recap, here is a list of error codes that may be returned by sqsh upon exit, and the reason that they could be returned:
The following sections provide detailed examples of combinations of variable settings and the results produced upon exit with certain failure conditions. It is assumed the $exit_value variable contains 0 in the next examples.
Color codes are presented as a string like {1;2;3}. If sqsh encounters a { (curly brace) in the prompt string it will assume a color-code is supplied and will act as such. No checks are performed on the validity of the color-code definition itself. The color definition consists of three values separated by semicolons.
The first value defines the Color Attribute Code: 0=none, 1=bold.
The second value defines the Text Color Code: 30=black, 31=red, 32=green, 33=yellow, 34=blue, 35=magenta, 36=cyan, 37=white.
The third value defines the Background Color Code: 40=black, 41=red, 42=green, 43=yellow, 44=blue, 45=magenta, 46=cyan, 47=white.
The last color-code (the background) may be omitted. Not all color and attribute combinations will produce good results, depending on your terminal type and color scheme, but the following values work OK on an xterm with a creamy white background color in Linux, as well as rxvt in Cygwin with a black background window:
Prompt: Blue text in white background "{0;34;47}"
Command text: Yellow text "{0;33}"
Command text: Default text color "{0}"
sqsh will translate the color-codes to an actual color-code string that is presented to readline: for example "\001\033[0;36;47m\002". Note that if you want to use curly brackets in your prompt, you have to escape colorization by specifying a double brace, like {{...}}. For example:
\set prompt_color='{0;34;47}'
\set text_color='{0}'
\set prompt='$prompt_color{{$DSQUERY.$username.$database.$lineno}}>$text_color '
Color support is automatically available if sqsh is compiled and linked with readline support enabled.
Defaults: $HOME/.sqshrc, $HOME/.sqsh_session, $HOME/.sqsh_history, $HOME/.sqsh_readline, $HOME/.sqsh_words, $tmp_dir/sqsh-dfr.*, $tmp_dir/sqsh-edit.* These can all be modified using the internal sqsh variables described above.
The addition of flow-of-control expressions has extended sqsh way beyond the scope of its original design, and it is quite obvious from using these features that they are hacked in and rather clunky (although still quite usable). As a result, the processing of these expressions is rather slow (when compared to Bourne shell), and the error reporting doesn't lend itself to debugging large scripts. The development of 1000+ line scripts is discouraged.
The combination of backgrounding and pipes does not work properly right now. What happens is, when a background job is run that incorporates a pipe-line, sqsh will suspend until the job is complete, which is obviously not what you desire. To test this, try the following:
1> select * from syscolumns
2> go | grep id &
You will find that you do not get your prompt back until the job completes. The same is more or less true for pipes being used with the \do command:
1> select name from master..sysdatabases
2> \do | grep syb
This will not give you a prompt back until the block is completed with \done. So you are typing the \do block itself in the dark, and there is no possibility to edit the block or retrieve it from the history list. You may have typed, for example (hopefully without syntax errors):
\echo #1
\done
After \done is processed the results will be displayed followed by a fresh prompt.
Please report any other bugs to http://sourceforge.net/p/sqsh/bugs. Feature requests may be posted to http://sourceforge.net/p/sqsh/feature-requests.
2014-03-12                                                        2.5