10 Minutes to Dask

This is a short overview of Dask geared towards new users. There is much more information contained in the rest of the documentation.

[Figure: Dask overview. Dask is composed of three parts: collections, task graphs, and schedulers.]

High-level collections are used to generate task graphs, which can be executed by schedulers on a single machine or a cluster.

We normally import Dask as follows:

>>> import numpy as np
>>> import pandas as pd

>>> import dask.dataframe as dd
>>> import dask.array as da
>>> import dask.bag as db

Depending on the type of data you are working with, you might not need all of these.

Creating a Dask Object

You can create a Dask object from scratch by supplying existing data and optionally including information about how the chunks should be structured.

See Dask DataFrame.

>>> index = pd.date_range("2021-09-01", periods=2400, freq="1h")
... df = pd.DataFrame({"a": np.arange(2400), "b": list("abcaddbe" * 300)}, index=index)
... ddf = dd.from_pandas(df, npartitions=10)
... ddf
Dask DataFrame Structure:
                        a       b
npartitions=10
2021-09-01 00:00:00  int64  object
2021-09-11 00:00:00    ...     ...
...                    ...     ...
2021-11-30 00:00:00    ...     ...
2021-12-09 23:00:00    ...     ...
Dask Name: from_pandas, 10 tasks

Now we have a Dask DataFrame with 2 columns and 2400 rows, composed of 10 partitions of 240 rows each. Each partition represents a piece of the data.
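
As a quick sanity check (a minimal sketch, reusing the ddf defined above), you can count the rows in each partition with map_partitions:

>>> ddf.map_partitions(len).compute()
0    240
1    240
2    240
3    240
4    240
5    240
6    240
7    240
8    240
9    240
dtype: int64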

Here are some key properties of a DataFrame:

>>> # check the index values covered by each partition
... ddf.divisions
(Timestamp('2021-09-01 00:00:00', freq='H'),
 Timestamp('2021-09-11 00:00:00', freq='H'),
 Timestamp('2021-09-21 00:00:00', freq='H'),
 Timestamp('2021-10-01 00:00:00', freq='H'),
 Timestamp('2021-10-11 00:00:00', freq='H'),
 Timestamp('2021-10-21 00:00:00', freq='H'),
 Timestamp('2021-10-31 00:00:00', freq='H'),
 Timestamp('2021-11-10 00:00:00', freq='H'),
 Timestamp('2021-11-20 00:00:00', freq='H'),
 Timestamp('2021-11-30 00:00:00', freq='H'),
 Timestamp('2021-12-09 23:00:00', freq='H'))

>>> # access a particular partition
... ddf.partitions[1]
Dask DataFrame Structure:
                  a       b
npartitions=1
2021-09-11     int64  object
2021-09-21       ...     ...
Dask Name: blocks, 11 tasks
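
Each partition is itself a lazy Dask DataFrame; calling compute on one returns the underlying pandas object (a quick check, reusing ddf from above):

>>> type(ddf.partitions[1].compute())
<class 'pandas.core.frame.DataFrame'>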

Indexing

Indexing Dask collections feels just like slicing NumPy arrays or pandas DataFrames.

>>> ddf.b
Dask Series Structure:
npartitions=10
2021-09-01 00:00:00    object
2021-09-11 00:00:00       ...
                        ...
2021-11-30 00:00:00       ...
2021-12-09 23:00:00       ...
Name: b, dtype: object
Dask Name: getitem, 20 tasks

>>> ddf["2021-10-01": "2021-10-09 5:00"]
Dask DataFrame Structure:
                                 a       b
npartitions=1
2021-10-01 00:00:00.000000000  int64  object
2021-10-09 05:00:59.999999999    ...     ...
Dask Name: loc, 11 tasks

Computation

Dask is lazily evaluated. The result from a computation isn’t computed until you ask for it. Instead, a Dask task graph for the computation is produced.

Anytime you have a Dask object and you want to get the result, call compute:

>>> ddf["2021-10-01": "2021-10-09 5:00"].compute()
                       a  b
2021-10-01 00:00:00  720  a
2021-10-01 01:00:00  721  b
2021-10-01 02:00:00  722  c
2021-10-01 03:00:00  723  a
2021-10-01 04:00:00  724  d
...                  ... ..
2021-10-09 01:00:00  913  b
2021-10-09 02:00:00  914  c
2021-10-09 03:00:00  915  a
2021-10-09 04:00:00  916  d
2021-10-09 05:00:00  917  d

[198 rows x 2 columns]
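
Note that compute returns an ordinary in-memory pandas object. Boolean masks work just like in pandas, too (a small sketch, reusing ddf; these rows follow from the repeating "abcaddbe" pattern used to build it):

>>> ddf[ddf.b == "a"].compute().head(3)
                     a  b
2021-09-01 00:00:00  0  a
2021-09-01 03:00:00  3  a
2021-09-01 08:00:00  8  a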

Methods

Dask collections match existing NumPy and pandas methods, so they should feel familiar. Call the method to set up the task graph, and then call compute to get the result.

>>> ddf.a.mean()
dd.Scalar<series-..., dtype=float64>

>>> ddf.a.mean().compute()
1199.5

>>> ddf.b.unique()
Dask Series Structure:
npartitions=1
   object
      ...
Name: b, dtype: object
Dask Name: unique-agg, 33 tasks

>>> ddf.b.unique().compute()
0    a
1    b
2    c
3    d
4    e
Name: b, dtype: object
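
If you need several results, dask.compute can evaluate multiple lazy objects in a single pass, sharing any common work between them (a minimal sketch, reusing the expressions above):

>>> import dask
... mean, unique = dask.compute(ddf.a.mean(), ddf.b.unique())
... mean
1199.5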

Methods can be chained together, just like in pandas:

>>> result = ddf["2021-10-01": "2021-10-09 5:00"].a.cumsum() - 100
... result
Dask Series Structure:
npartitions=1
2021-10-01 00:00:00.000000000    int64
2021-10-09 05:00:59.999999999      ...
Name: a, dtype: int64
Dask Name: sub, 16 tasks

>>> result.compute()
2021-10-01 00:00:00       620
2021-10-01 01:00:00      1341
2021-10-01 02:00:00      2063
2021-10-01 03:00:00      2786
2021-10-01 04:00:00      3510
                        ...
2021-10-09 01:00:00    158301
2021-10-09 02:00:00    159215
2021-10-09 03:00:00    160130
2021-10-09 04:00:00    161046
2021-10-09 05:00:00    161963
Freq: H, Name: a, Length: 198, dtype: int64

Visualize the Task Graph

So far we’ve been setting up computations and calling compute. In addition to triggering computation, we can inspect the task graph to figure out what’s going on.

>>> result.dask
HighLevelGraph with 7 layers.
<dask.highlevelgraph.HighLevelGraph object at 0x7f129df7a9d0>
1. from_pandas-0b850a81e4dfe2d272df4dc718065116
2. loc-fb7ada1e5ba8f343678fdc54a36e9b3e
3. getitem-55d10498f88fc709e600e2c6054a0625
4. series-cumsum-map-131dc242aeba09a82fea94e5442f3da9
5. series-cumsum-take-last-9ebf1cce482a441d819d8199eac0f721
6. series-cumsum-d51d7003e20bd5d2f767cd554bdd5299
7. sub-fed3e4af52ad0bd9c3cc3bf800544f57

>>> result.visualize()
[Figure: Dask task graph for the DataFrame computation. "loc" and "getitem" operations select a small section of the DataFrame values, a cumulative sum ("cumsum") is applied, and finally a value is subtracted from the result.]

Low-Level Interfaces

Often when parallelizing existing code bases or building custom algorithms, you run into code that is parallelizable, but isn’t just a big DataFrame or array.

Dask Delayed lets you wrap individual function calls into a lazily constructed task graph:

import dask

@dask.delayed
def inc(x):
    return x + 1

@dask.delayed
def add(x, y):
    return x + y

a = inc(1)       # no work has happened yet
b = inc(2)       # no work has happened yet
c = add(a, b)    # no work has happened yet

c = c.compute()  # This triggers all of the above computations
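
Delayed objects compose with the rest of Dask. For example (a minimal sketch, reusing inc from above), dask.compute can evaluate several delayed objects in a single pass:

a = inc(10)                 # still lazy
b = inc(20)                 # still lazy
print(dask.compute(a, b))   # runs both at once and prints (11, 21)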

Scheduling

After you have generated a task graph, it is the scheduler’s job to execute it (see Scheduling).

By default, for the majority of Dask APIs, when you call compute on a Dask object, Dask uses the thread pool on your computer (a.k.a. the threaded scheduler) to run computations in parallel. This is true for Dask Array, Dask DataFrame, and Dask Delayed. The exception is Dask Bag, which uses the multiprocessing scheduler by default.
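
You can also choose a scheduler explicitly for a single call with the scheduler keyword (a minimal sketch, reusing ddf; these are the built-in scheduler names):

>>> ddf.a.mean().compute(scheduler="threads")      # the default for DataFrames
1199.5
>>> ddf.a.mean().compute(scheduler="processes")    # multiprocessing scheduler
1199.5
>>> ddf.a.mean().compute(scheduler="synchronous")  # single-threaded, handy for debugging
1199.5

You can also set a default for your whole session with dask.config.set(scheduler=...).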

If you want more control, use the distributed scheduler instead. Despite having "distributed" in its name, the distributed scheduler works well on both single and multiple machines. Think of it as the "advanced scheduler".

This is how you set up a cluster that uses only your own computer:

>>> from dask.distributed import Client
...
... client = Client()
... client
<Client: 'tcp://127.0.0.1:41703' processes=4 threads=12, memory=31.08 GiB>

Once you create a client, any computation will run on the cluster that it points to.
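
If you want more control over the cluster's size, you can build it explicitly with LocalCluster and pass it to the client (a minimal sketch; the worker counts here are arbitrary):

>>> from dask.distributed import LocalCluster
... cluster = LocalCluster(n_workers=2, threads_per_worker=2)
... client = Client(cluster)

When you are done, client.close() shuts the client down.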

Diagnostics

When using a distributed cluster, Dask provides a diagnostics dashboard where you can see your tasks as they are processed.

>>> client.dashboard_link
'http://127.0.0.1:8787/status'

To learn more about these diagnostics, take a look at Dashboard Diagnostics.