arguments
The command-line arguments for the executable. Use
quotes if a single argument contains spaces.
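For example, a minimal RSL fragment (the executable path and argument
values are purely illustrative) passing one argument that contains a space
and one that does not:

    (executable=/bin/echo)
    (arguments="hello world" again)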
count
The number of executions of the executable. [Default:
<literal>1</literal>]
directory
Specifies the path of the directory the job manager will
use as the default directory for the requested job. [Default:
<literal>$(HOME)</literal>]
dry_run
If set to yes, the job manager will not submit the
job for execution and will instead return success. [Default:
<literal>no</literal>]
environment
The environment variables that will be defined for the
executable, in addition to the default set that is given to the job by the
job manager.
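The value is given as a list of (name value) pairs. For example, a
hypothetical fragment defining two extra variables (names and values are
illustrative only):

    (environment=(MY_DATA_DIR /tmp/data)(MY_DEBUG 1))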
executable
The name of the executable file to run on the remote
machine. If the value is a GASS URL, the file is transferred to the remote
GASS cache before the job is executed and removed after the job has
terminated.
expiration
Time (in seconds) that a job which fails to receive a
two-phase commit end signal is kept before it is cleaned up. [Default:
<literal>14400</literal>]
file_clean_up
Specifies a list of files which will be removed after the
job is completed.
file_stage_in
Specifies a list of ("remote URL" "local
file") pairs which indicate files to be staged to the nodes which will
run the job.
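For example, a sketch using a hypothetical GridFTP server and destination
path:

    (file_stage_in=(gsiftp://ftp.example.org/data/input.dat $(HOME)/input.dat))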
file_stage_in_shared
Specifies a list of ("remote URL" "local
file") pairs which indicate files to be staged into the cache. A symlink
from the cache to the "local file" path will be made.
file_stage_out
Specifies a list of ("local file" "remote
URL") pairs which indicate files to be staged from the job to a
GASS-compatible file server.
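A corresponding sketch for staging a result file back out (server and
paths again illustrative):

    (file_stage_out=($(HOME)/output.dat gsiftp://ftp.example.org/results/output.dat))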
gass_cache
Specifies a location that overrides the default GASS cache
location.
gram_my_job
Obsolete and ignored. [Default:
<literal>collective</literal>]
host_count
Only applies to clusters of SMP computers, such as newer
IBM SP systems. Defines the number of nodes ("pizza boxes") to
distribute the "count" processes across.
job_type
Specifies how the job manager should start the job.
Possible values are single (start only one process or thread, even if
count > 1), multiple (start count processes or threads), mpi (use the
appropriate method, e.g. mpirun, to start a program compiled with a
vendor-provided MPI library; the program is started with count nodes), and
condor (start jobs in the "condor" universe). [Default:
<literal>multiple</literal>]
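For example, a fragment requesting an MPI-style start of four processes
spread across two nodes (the numbers are illustrative):

    (count=4)(host_count=2)(job_type=mpi)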
library_path
Specifies a list of paths to be appended to the
system-specific library path environment variables. [Default:
<literal>$(GLOBUS_LOCATION)/lib</literal>]
loglevel
Override the default log level for this job. The value of
this attribute is a combination of the strings FATAL, ERROR, WARN,
INFO, DEBUG, and TRACE, joined by the | character.
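A sketch enabling three levels (the value is quoted here so that the |
characters are passed through as literal text rather than treated as RSL
operators):

    (loglevel="ERROR|WARN|INFO")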
logpattern
Override the default log path pattern for this job. The
value of this attribute is a string (potentially containing RSL substitutions)
that is evaluated to produce the path the log is written to. If the resulting
string still contains $(DATE) (or any other RSL substitution), it will be
reevaluated at log time.
max_cpu_time
Explicitly set the maximum cputime for a single execution
of the executable. The units are minutes. The value will go through an
atoi() conversion to obtain an integer. If the GRAM scheduler cannot set
cputime, then an error will be returned.
max_memory
Explicitly set the maximum amount of memory for a single
execution of the executable. The units are megabytes. The value will go
through an atoi() conversion to obtain an integer. If the GRAM scheduler
cannot set maxMemory, then an error will be returned.
max_time
The maximum walltime or cputime for a single execution of
the executable. Walltime or cputime is selected by the GRAM scheduler being
interfaced. The units are minutes. The value will go through an atoi()
conversion to obtain an integer.
max_wall_time
Explicitly set the maximum walltime for a single
execution of the executable. The units are minutes. The value will go
through an atoi() conversion to obtain an integer. If the GRAM scheduler
cannot set walltime, then an error will be returned.
min_memory
Explicitly set the minimum amount of memory for a single
execution of the executable. The units are megabytes. The value will go
through an atoi() conversion to obtain an integer. If the GRAM scheduler
cannot set minMemory, then an error will be returned.
project
Target the job to a project account as defined by the
scheduler at the (remote) resource.
proxy_timeout
Obsolete and ignored. Now a job-manager-wide
setting.
queue
Target the job to a queue (class) name as defined by the
scheduler at the (remote) resource.
remote_io_url
Writes the given value (a URL base string) to a file, and
adds the path to that file to the environment through the GLOBUS_REMOTE_IO_URL
environment variable. If this is specified as part of a job restart RSL, the
job manager will update the file’s contents. This is intended for jobs
that want to access files via GASS, but the URL of the GASS server has changed
due to a GASS server restart.
restart
Start a new job manager, but instead of submitting a new
job, start managing an existing job. The job manager will search for the job
state file created by the original job manager. If it finds the file and
successfully reads it, it will become the new manager of the job, sending
callbacks on status and streaming stdout/err if appropriate. It will fail if
it detects that the old job manager is still alive (via a timestamp in the
state file). If stdout or stderr was being streamed over the network, new
stdout and stderr attributes can be specified in the restart RSL and the
job manager will stream to the new locations (useful when output is going to a
GASS server started by the client that’s listening on a dynamic port,
and the client was restarted). The new job manager will return a new contact
string that should be used to communicate with it. If a job manager is
restarted multiple times, any of the previous contact strings can be given for
the restart attribute.
rsl_substitution
Specifies a list of values which can be substituted into
other RSL attributes' values through the $(SUBSTITUTION) mechanism.
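A sketch following the usual pattern: define a named value, then reference
it elsewhere with $(NAME) (the path is illustrative):

    (rsl_substitution=(TOPDIR /home/someuser))
    (executable=$(TOPDIR)/bin/a.out)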
save_state
Causes the job manager to save its job state
information to a persistent file on disk. If the job manager exits or is
suspended, the client can later start up a new job manager which can continue
monitoring the job.
savejobdescription
Save a copy of the job description to $HOME. [Default:
<literal>no</literal>]
scratch_dir
Specifies the location in which to create a scratch
subdirectory. The SCRATCH_DIRECTORY RSL substitution will be set to the name
of the directory which is created.
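A sketch that requests a scratch directory and then uses it as the job's
working directory (the base path is illustrative):

    (scratch_dir=/scratch)
    (directory=$(SCRATCH_DIRECTORY))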
stderr
The name of the remote file to store the standard error
from the job. If the value is a GASS URL, the standard error from the job is
transferred dynamically during the execution of the job. There are two
accepted forms of this value. It can consist of a single destination: stderr =
URL, or a sequence of destinations: stderr = (DESTINATION) (DESTINATION). In
the latter case, the DESTINATION may itself be a URL or a sequence of an
x-gass-cache URL followed by a cache tag. [Default:
<literal>/dev/null</literal>]
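For example (host, port, and cache tag are illustrative; the same two forms
apply to stdout below), either a single destination:

    (stderr=https://client.example.org:20000/err)

or a sequence of destinations, the second being an x-gass-cache URL followed
by a cache tag:

    (stderr=(https://client.example.org:20000/err)
            (x-gass-cache://$(GLOBUS_GRAM_JOB_CONTACT)stderr anExtraTag))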
stderr_position
Specifies where in the file remote standard error
streaming should be restarted from. Must be 0.
stdin
The name of the file to be used as standard input for the
executable on the remote machine. If the value is a GASS URL, the file is
transferred to the remote GASS cache before the job is executed and removed
after the job has terminated. [Default:
<literal>/dev/null</literal>]
stdout
The name of the remote file to store the standard output
from the job. If the value is a GASS URL, the standard output from the job is
transferred dynamically during the execution of the job. There are two
accepted forms of this value. It can consist of a single destination: stdout =
URL, or a sequence of destinations: stdout = (DESTINATION) (DESTINATION). In
the latter case, the DESTINATION may itself be a URL or a sequence of an
x-gass-cache URL followed by a cache tag. [Default:
<literal>/dev/null</literal>]
stdout_position
Specifies where in the file remote output streaming
should be restarted from. Must be 0.
two_phase
Use a two-phase commit for job submission and completion.
The job manager will respond to the initial job request with a
WAITING_FOR_COMMIT error. It will then wait for a signal from the client
before doing the actual job submission. The integer supplied is the number of
seconds the job manager should wait before timing out. If the job manager
times out before receiving the commit signal, or if a client issues a cancel
signal, the job manager will clean up the job’s files and exit, sending
a callback with the job status as GLOBUS_GRAM_PROTOCOL_JOB_STATE_FAILED. After
the job manager sends a DONE or FAILED callback, it will wait for a commit
signal from the client. If it receives one, it cleans up and exits as usual.
If it times out and save_state was enabled, it will leave all of the
job’s files in place and exit (assuming the client is down and will
attempt a job restart later). The timeout value can be extended via a signal.
When one of the following errors occurs, the job manager does not delete the
job state file when it exits: GLOBUS_GRAM_PROTOCOL_ERROR_COMMIT_TIMED_OUT,
GLOBUS_GRAM_PROTOCOL_ERROR_TTL_EXPIRED, GLOBUS_GRAM_PROTOCOL_ERROR_JM_STOPPED,
GLOBUS_GRAM_PROTOCOL_ERROR_USER_PROXY_EXPIRED. In these cases, it cannot be
restarted, so the job manager will not wait for the commit signal after
sending the FAILED callback.
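A sketch requesting a 60-second commit window (the value is illustrative);
the commit signals themselves are sent through the GRAM client protocol,
not through the RSL:

    (two_phase=60)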
username
Verify that the job is running as this user.