Starting servers with the JupyterHub API#
Sometimes, when working with applications such as BinderHub, it may be necessary to launch Jupyter-based services on behalf of your users. Doing so can be achieved through JupyterHub’s REST API, which allows one to launch and manage servers on behalf of users through API calls instead of the JupyterHub UI. This way, you can take advantage of other user/launch/lifecycle patterns that are not natively supported by the JupyterHub UI, all without the need to develop the server management features of JupyterHub Spawners and/or Authenticators.
This tutorial walks through using the JupyterHub API to manage servers for users. In particular, it covers how to:

- check server status
- start servers
- wait for servers to start
- stop servers
- communicate with servers
At the end, we also provide sample Python code that can be used to implement these steps.
Checking server status#
First, request information about a particular user using a GET request:
GET /hub/api/users/:username

Required scope: read:servers

The response you get will include a servers field, which is a dictionary, as shown in this JSON-formatted response:
{
"admin": false,
"groups": [],
"pending": null,
"server": null,
"name": "test-1",
"kind": "user",
"last_activity": "2021-08-03T18:12:46.026411Z",
"created": "2021-08-03T18:09:59.767600Z",
"roles": ["user"],
"servers": {}
}
Many JupyterHub deployments only use a ‘default’ server, represented as an empty string '' for a name. The servers field can hold one of two results. It can be empty, as in the sample JSON response above, in which case the user has no running servers. If the user does have running servers, the returned dict contains information about each of them, as shown in this response:
"servers": {
"": {
"name": "",
"last_activity": "2021-08-03T18:48:35.934000Z",
"started": "2021-08-03T18:48:29.093885Z",
"pending": null,
"ready": true,
"url": "/user/test-1/",
"user_options": {},
"progress_url": "/hub/api/users/test-1/server/progress"
}
}
Key properties of a server:

- name: the server's name. Always the same as the key in servers.
- ready: boolean. If true, the server can be expected to respond to requests at url.
- pending: null or a string indicating a transitional state (such as start or stop). Will always be null if ready is true, or a string if ready is false.
- url: the server's URL path (e.g. /user/:name/:servername/) where the server can be accessed if ready is true.
- progress_url: the API URL path (starting with /hub/api) where the progress API can be used to wait for the server to be ready.
- last_activity: ISO 8601 timestamp indicating when activity was last observed on the server.
- started: ISO 8601 timestamp indicating when the server was last started.
The two responses above are from a user with no servers and another with one ready server. The sample below is a response likely to be received when one requests a server launch while the server is not yet ready:
"servers": {
"": {
"name": "",
"last_activity": "2021-08-03T18:48:29.093885Z",
"started": "2021-08-03T18:48:29.093885Z",
"pending": "spawn",
"ready": false,
"url": "/user/test-1/",
"user_options": {},
"progress_url": "/hub/api/users/test-1/server/progress"
}
}
Note that ready is false and pending has the value spawn, meaning that the server is not ready and attempting to access it may not work as it is still in the process of spawning. We'll get more into this below in waiting for a server.
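Putting this section together, here is a minimal sketch of checking a user's server status. The hub_url, user, and token values are placeholders you would supply; the token needs the read:servers scope:

import requests

hub_url = "http://127.0.0.1:8000"  # placeholder: your JupyterHub base URL
user = "test-1"                    # placeholder: the user to check
token = "..."                      # placeholder: an API token with the read:servers scope

r = requests.get(
    f"{hub_url}/hub/api/users/{user}",
    headers={"Authorization": f"token {token}"},
)
r.raise_for_status()
user_model = r.json()

# report the state of each of the user's servers
for name, server in user_model.get("servers", {}).items():
    state = "ready" if server["ready"] else f"pending {server['pending']}"
    print(f"server {name!r}: {state} at {server['url']}")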
Starting servers#
To start a server, make this API request:
POST /hub/api/users/:username/servers/[:servername]
Required scope: servers
Assuming the request was valid, there are two possible responses:
- 201 Created: the launch completed and the server is ready; it is available at the server's URL immediately.
- 202 Accepted: the more likely response. The server has begun launching but is not immediately ready; it shows pending: 'spawn' at this point and you should wait for it to start.
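For example, a request to start the default server might look like the following minimal sketch. The hub_url, user, and token values are placeholders; the token needs the servers scope:

import requests

hub_url = "http://127.0.0.1:8000"  # placeholder
user = "test-1"                    # placeholder
server_name = ""                   # "" is the default server
token = "..."                      # placeholder: an API token with the servers scope

r = requests.post(
    f"{hub_url}/hub/api/users/{user}/servers/{server_name}",
    headers={"Authorization": f"token {token}"},
)
r.raise_for_status()
if r.status_code == 201:
    print("Server launched and ready")
elif r.status_code == 202:
    print("Server launching; wait for it to become ready")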
Waiting for a server to start#
After receiving a 202 Accepted response, you have to wait for the server to start.
Two approaches can be applied to establish when the server is ready:
Polling the server model#
The simplest way to check if a server is ready is to programmatically query the server model until two conditions are true:
- the server name is contained in the servers response, and
- servers['servername']['ready'] is true.
The Python code snippet below can be used to check if a server is ready:
def server_ready(hub_url, user, server_name="", token):
r = requests.get(
f"{hub_url}/hub/api/users/{user}/servers/{server_name}",
headers={"Authorization": f"token {token}"},
)
r.raise_for_status()
user_model = r.json()
servers = user_model.get("servers", {})
if server_name not in servers:
return False
server = servers[server_name]
if server['ready']:
print(f"Server {user}/{server_name} ready at {server['url']}")
return True
else:
print(f"Server {user}/{server_name} not ready, pending {server['pending']}")
return False
You can keep making this check until ready is true.
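For example, you could wrap server_ready in a simple polling loop. This is a minimal sketch; wait_until_ready and the interval and timeout values are illustrative choices, not part of the JupyterHub API:

import time

def wait_until_ready(hub_url, user, token, server_name="", interval=1, timeout=300):
    """Poll server_ready() until the server is ready or the timeout expires"""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if server_ready(hub_url, user, token, server_name):
            return
        time.sleep(interval)
    raise TimeoutError(f"Server {user}/{server_name} not ready after {timeout}s")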
Using the progress API#
The most efficient way to wait for a server to start is by using the progress API.
The progress URL is available in the server model under progress_url and has the form /hub/api/users/:user/servers/:servername/progress.

The default server's progress can be accessed at :user/servers//progress or :user/server/progress, as demonstrated in the following GET request:
GET /hub/api/users/:user/servers/:servername/progress
Required scope: read:servers
The progress API is an example of an EventStream API. Messages are streamed and delivered in the form:
data: {"progress": 10, "message": "...", ...}
where everything after data: is a JSON-serialized dictionary. Lines that do not start with data: should be ignored.
Progress events have the form:
{
"progress": 0-100,
"message": "",
"ready": True, # or False
}
- progress: integer, 0-100
- message: string message describing progress stages
- ready: present and true only for the last event, when the server is ready
- url: only present if ready is true; will be the server's URL
The progress API can be used even with fully ready servers. If the server is ready, there will only be one event, which will look like:
{
"progress": 100,
"ready": true,
"message": "Server ready at /user/test-1/",
"html_message": "Server ready at <a href=\"/user/test-1/\">/user/test-1/</a>",
"url": "/user/test-1/"
}
where ready and url are the same as in the server model, and ready will always be true.
A significant advantage of the progress API is that it shows the status of the server through a stream of messages. Below is an example of a typical complete stream from the API:
data: {"progress": 0, "message": "Server requested"}
data: {"progress": 50, "message": "Spawning server..."}
data: {"progress": 100, "ready": true, "message": "Server ready at /user/test-user/", "html_message": "Server ready at <a href=\"/user/test-user/\">/user/test-user/</a>", "url": "/user/test-user/"}
Here is a Python example for consuming an event stream:
import json

def event_stream(session, url):
    """Generator yielding events from a JSON event stream

    For use with the server progress API
    """
    r = session.get(url, stream=True)
    r.raise_for_status()
    for line in r.iter_lines():
        line = line.decode('utf8', 'replace')
        # event lines all start with `data:`
        # all other lines should be ignored (they will be empty)
        if line.startswith('data:'):
            yield json.loads(line.split(':', 1)[1])
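As a minimal sketch, this generator can be combined with the progress_url from the server model to block until the server is ready. The session, hub_url, server_name, and user_model variables are assumed to be set up as in the earlier snippets:

# assumed: session is a requests.Session with the Authorization header set,
# hub_url is the Hub's base URL, server_name names the server ("" for default),
# and user_model is the JSON response from GET /hub/api/users/:user
progress_url = user_model["servers"][server_name]["progress_url"]
for event in event_stream(session, f"{hub_url}{progress_url}"):
    print(f"{event['progress']}%: {event['message']}")
    if event.get("ready"):
        print(f"Server ready at {event['url']}")
        break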
Stopping servers#
Servers can be stopped with a DELETE request:
DELETE /hub/api/users/:user/servers/[:servername]
Required scope: servers
Similar to when starting a server, issuing the DELETE request above might not stop the server immediately. Instead, the DELETE request has two possible response codes:
- 204 Deleted: the delete completed and the server is fully stopped. It will now be absent from the user's servers model.
- 202 Accepted: your request was accepted but is not yet completely processed. The server has pending: 'stop' at this point.
There is no progress API for checking when a server actually stops.
The only way to wait for a server to stop is to poll it and wait for the server to disappear from the user's servers model.
This Python code snippet can be used to stop a server and then wait for the process to complete:
import time

# `session` is assumed to be a requests.Session with the Hub API token in its
# Authorization header, and `log` a standard logging.Logger

def stop_server(session, hub_url, user, server_name=""):
    """Stop a server via the JupyterHub API

    Returns when the server has finished stopping
    """
    # step 1: get user status
    user_url = f"{hub_url}/hub/api/users/{user}"
    server_url = f"{user_url}/servers/{server_name}"
    log_name = f"{user}/{server_name}".rstrip("/")

    log.info(f"Stopping server {log_name}")
    r = session.delete(server_url)
    if r.status_code == 404:
        log.info(f"Server {log_name} already stopped")
        return
    r.raise_for_status()
    if r.status_code == 204:
        log.info(f"Server {log_name} stopped")
        return

    # else: 202, stop requested, but not complete
    # wait for stop to finish
    log.info(f"Server {log_name} stopping...")

    # wait for server to be done stopping
    while True:
        r = session.get(user_url)
        r.raise_for_status()
        user_model = r.json()
        if server_name not in user_model.get("servers", {}):
            log.info(f"Server {log_name} stopped")
            return
        server = user_model["servers"][server_name]
        if not server['pending']:
            raise ValueError(f"Waiting for {log_name}, but no longer pending.")
        log.info(f"Server {log_name} pending: {server['pending']}")
        # wait to poll again
        time.sleep(1)
Communicating with servers#
JupyterHub tokens with the access:servers scope can be used to communicate with servers themselves. The tokens can be the same as those you used to launch your service.
Note
Access scopes are new in JupyterHub 2.0. To access servers in JupyterHub 1.x, a token must be owned by the same user as the server, or be an admin token if admin_access is enabled.
The URL returned in a server model is a URL path suffix, e.g. /user/:name/, to append to the JupyterHub base URL. In other words, the full URL is of the form {hub_url}{server_url}, where hub_url would be http://127.0.0.1:8000 by default and server_url is /user/myname. Combined, these give a full URL of http://127.0.0.1:8000/user/myname.
Python example#
The JupyterHub repo includes a complete example in examples/server-api that ties all these steps together.

In summary, the process of managing servers on behalf of users is:

- Get user information from /hub/api/users/:name.
- The server model includes a ready state to tell you if it's ready.
- If it's not ready, you can follow up with progress_url to wait for it.
- If it is ready, you can use the url field to link directly to the running server.
The example below demonstrates starting and stopping servers via the JupyterHub API, including waiting for them to start via the progress API and waiting for them to stop by polling the user model.
import json
import logging
import time

log = logging.getLogger(__name__)


def event_stream(session, url):
    """Generator yielding events from a JSON event stream

    For use with the server progress API
    """
    r = session.get(url, stream=True)
    r.raise_for_status()
    for line in r.iter_lines():
        line = line.decode('utf8', 'replace')
        # event lines all start with `data:`
        # all other lines should be ignored (they will be empty)
        if line.startswith('data:'):
            yield json.loads(line.split(':', 1)[1])
def start_server(session, hub_url, user, server_name=""):
    """Start a server for a jupyterhub user

    Returns the full URL for accessing the server
    """
    user_url = f"{hub_url}/hub/api/users/{user}"
    log_name = f"{user}/{server_name}".rstrip("/")

    # step 1: get user status
    r = session.get(user_url)
    r.raise_for_status()
    user_model = r.json()

    # if server is not 'active', request launch
    if server_name not in user_model.get('servers', {}):
        log.info(f"Starting server {log_name}")
        r = session.post(f"{user_url}/servers/{server_name}")
        r.raise_for_status()
        if r.status_code == 201:
            log.info(f"Server {log_name} is launched and ready")
        elif r.status_code == 202:
            log.info(f"Server {log_name} is launching...")
        else:
            log.warning(f"Unexpected status: {r.status_code}")
        # refresh the user model so it includes the new server
        r = session.get(user_url)
        r.raise_for_status()
        user_model = r.json()

    # report server status
    server = user_model['servers'][server_name]
    if server['pending']:
        status = f"pending {server['pending']}"
    elif server['ready']:
        status = "ready"
    else:
        # shouldn't be possible!
        raise ValueError(f"Unexpected server state: {server}")
    log.info(f"Server {log_name} is {status}")

    # wait for server to be ready using progress API
    progress_url = user_model['servers'][server_name]['progress_url']
    for event in event_stream(session, f"{hub_url}{progress_url}"):
        log.info(f"Progress {event['progress']}%: {event['message']}")
        if event.get("ready"):
            server_url = event['url']
            break
    else:
        # server never ready
        raise ValueError(f"{log_name} never started!")

    # at this point, we know the server is ready and waiting to receive requests
    # return the full URL where the server can be accessed
    return f"{hub_url}{server_url}"
def stop_server(session, hub_url, user, server_name=""):
    """Stop a server via the JupyterHub API

    Returns when the server has finished stopping
    """
    # step 1: get user status
    user_url = f"{hub_url}/hub/api/users/{user}"
    server_url = f"{user_url}/servers/{server_name}"
    log_name = f"{user}/{server_name}".rstrip("/")

    log.info(f"Stopping server {log_name}")
    r = session.delete(server_url)
    if r.status_code == 404:
        log.info(f"Server {log_name} already stopped")
        return
    r.raise_for_status()
    if r.status_code == 204:
        log.info(f"Server {log_name} stopped")
        return

    # else: 202, stop requested, but not complete
    # wait for stop to finish
    log.info(f"Server {log_name} stopping...")

    # wait for server to be done stopping
    while True:
        r = session.get(user_url)
        r.raise_for_status()
        user_model = r.json()
        if server_name not in user_model.get("servers", {}):
            log.info(f"Server {log_name} stopped")
            return
        server = user_model["servers"][server_name]
        if not server['pending']:
            raise ValueError(f"Waiting for {log_name}, but no longer pending.")
        log.info(f"Server {log_name} pending: {server['pending']}")
        # wait to poll again
        time.sleep(1)
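A short usage sketch tying the example functions together. The token value, hub_url, and username are placeholders; the token needs the servers scope (plus access:servers if you want to talk to the server afterwards):

import logging

import requests

logging.basicConfig(level=logging.INFO)

token = "..."  # placeholder: an API token with the servers scope
session = requests.Session()
session.headers = {"Authorization": f"token {token}"}

hub_url = "http://127.0.0.1:8000"
server_url = start_server(session, hub_url, "test-1")
print(f"Server is running at {server_url}")

# ... interact with the server here ...

stop_server(session, hub_url, "test-1")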