Arrays

Dask arrays are blocked NumPy arrays

Dask arrays coordinate many NumPy arrays, arranged into chunks within a grid. They support a large subset of the NumPy API.
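
As a minimal sketch of what "blocked" means, the following illustrative snippet (not part of the original notebook; the array sizes here are arbitrary) wraps an ordinary NumPy array in a Dask array and inspects its chunk layout.

import numpy as np
import dask.array as da

# Wrap an in-memory NumPy array in a Dask array split into a 2x2 grid of blocks
a = np.arange(16).reshape(4, 4)
d = da.from_array(a, chunks=(2, 2))

print(d.chunks)           # ((2, 2), (2, 2)) -- chunk sizes along each axis
print(d.numblocks)        # (2, 2) -- four NumPy blocks in total
print(d.sum().compute())  # 120 -- reductions run per block and are then aggregated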

Start Dask Client for Dashboard

Starting the Dask Client is optional. It provides a dashboard that is useful for gaining insight into the computation.

The link to the dashboard will become visible when you create the client below. We recommend having it open on one side of your screen while using your notebook on the other side. Arranging your windows this way can take some effort, but seeing both at the same time is very useful while learning.

In [1]:
from dask.distributed import Client, progress

# Local cluster in the same process: one worker, four threads, 2 GB memory limit
client = Client(processes=False, threads_per_worker=4, n_workers=1, memory_limit='2GB')
client
Out[1]:

Client

Cluster

  • Workers: 1
  • Cores: 4
  • Memory: 2.00 GB
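
If the rich client summary above is not rendered (for example when running outside a notebook), the dashboard URL can be read directly from the client; dashboard_link is a standard attribute of distributed.Client.

# Print the dashboard URL so it can be opened in a separate browser tab
print(client.dashboard_link)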

Create a random array

This creates a 10000x10000 array of random numbers, represented as many NumPy arrays of size 1000x1000 (or smaller if the array cannot be divided evenly). In this case there are 100 (10x10) NumPy arrays of size 1000x1000.

In [2]:
import dask.array as da
x = da.random.random((10000, 10000), chunks=(1000, 1000))
x
Out[2]:
dask.array<random_sample, shape=(10000, 10000), dtype=float64, chunksize=(1000, 1000)>
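
To confirm the 10x10 grid of 1000x1000 blocks described above, you can inspect the chunk metadata without computing anything; this optional check is cheap because only metadata is touched.

print(x.chunks)        # ten 1000-wide chunks along each axis
print(x.numblocks)     # (10, 10) -- 100 blocks in total
print(x.nbytes / 1e9)  # 0.8 -- GB of float64 data if fully materialized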

Use NumPy syntax as usual

In [3]:
y = x + x.T
z = y[::2, 5000:].mean(axis=1)
z
Out[3]:
dask.array<mean_agg-aggregate, shape=(5000,), dtype=float64, chunksize=(500,)>
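
Most element-wise functions, slicing, and reductions follow the NumPy API. A few further illustrative operations (not from the original notebook) are sketched below; everything stays lazy until you call .compute().

# More NumPy-style operations -- each of these only builds a task graph
w = da.exp(x)               # element-wise ufunc
col_means = x.mean(axis=0)  # reduction along an axis
top_left = x[:100, :100]    # slicing returns another dask array
product = x @ x.T           # blocked matrix multiply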

Call .compute() when you want your result as a NumPy array.

If you started Client() above, you may want to watch the status page during the computation.

In [4]:
z.compute()
Out[4]:
array([1.00382251, 1.00412435, 1.00436408, ..., 1.01011056, 0.99062839,
       1.00464329])
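
If you need several results, it is usually cheaper to compute them together so that shared intermediates are evaluated only once; dask.compute accepts multiple collections. The particular reductions below are just an illustration.

import dask

# Compute two results in one pass over the shared graph
z_min, z_max = dask.compute(z.min(), z.max())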

Persist data in memory

If you have enough available RAM for your dataset, you can persist the data in memory.

This allows future computations to be much faster. Note that with the distributed scheduler persist() returns immediately and the computation continues in the background, so the first access below still spends some time waiting for blocks to finish.

In [5]:
y = y.persist()
In [6]:
%time y[0, 0].compute()
CPU times: user 1.78 s, sys: 848 ms, total: 2.63 s
Wall time: 4.84 s
Out[6]:
1.7967013218111942
In [7]:
%time y.sum().compute()
CPU times: user 332 ms, sys: 180 ms, total: 512 ms
Wall time: 352 ms
Out[7]:
100003913.47056422
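
Because persist() runs in the background, the progress function imported at the top can be used to watch the blocks arrive in memory; a small optional sketch (re-persisting an already persisted array is harmless):

y = y.persist()  # non-blocking: work continues on the worker
progress(y)      # progress bar fills as blocks land in memory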
