01 Data Access

Dask Dataframes can read and store data in many of the same formats as Pandas dataframes. In this example we read and write data with the popular CSV and Parquet formats, and discuss best practices when using these formats.

In [1]:
from IPython.display import HTML

HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/0eEsIA0O1iE?rel=0&amp;controls=0&amp;showinfo=0" frameborder="0" allowfullscreen></iframe>')
Out[1]:

Start Dask Client for Dashboard

Starting the Dask Client is optional. It provides a dashboard which is useful for gaining insight into the computation.

The link to the dashboard will become visible when you create the client below. We recommend having it open on one side of your screen while using your notebook on the other side. It can take some effort to arrange your windows, but seeing them both at the same time is very useful when learning.

In [2]:
from dask.distributed import Client
client = Client(n_workers=1, threads_per_worker=4, processes=False, memory_limit='2GB')
client
Out[2]:

Client

Cluster

  • Workers: 1
  • Cores: 4
  • Memory: 2.00 GB
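
The dashboard address is also available programmatically. As a small aside (not part of the original notebook), the distributed Client exposes it as an attribute, which you can print if the link above is not rendered:

In [ ]:
# Print the URL of the diagnostic dashboard for the client created above
client.dashboard_link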

Create artificial dataset

First we create an artificial dataset and write it to many CSV files.

You don't need to understand this section; we're just creating a dataset for the rest of the notebook.

In [3]:
import dask
df = dask.datasets.timeseries()
df
Out[3]:
Dask DataFrame Structure:
                   id    name        x        y
npartitions=30
2000-01-01      int64  object  float64  float64
2000-01-02        ...     ...      ...      ...
...               ...     ...      ...      ...
2000-01-30        ...     ...      ...      ...
2000-01-31        ...     ...      ...      ...
Dask Name: make-timeseries, 30 tasks
In [4]:
import os
import datetime

if not os.path.exists('data'):
    os.mkdir('data')

def name(i):
    """ Provide date for filename given index
    
    Examples
    --------
    >>> name(0)
    '2000-01-01'
    >>> name(10)
    '2000-01-11'
    """
    return str(datetime.date(2000, 1, 1) + i * datetime.timedelta(days=1))
    
df.to_csv('data/*.csv', name_function=name);

Read CSV files

We now have many CSV files in our data directory, one for each day in the month of January 2000. Each CSV file holds timeseries data for that day. We can read all of them as one logical dataframe using the dd.read_csv function with a glob string.

In [5]:
!ls data/*.csv | head
data/2000-01-01.csv
data/2000-01-02.csv
data/2000-01-03.csv
data/2000-01-04.csv
data/2000-01-05.csv
data/2000-01-06.csv
data/2000-01-07.csv
data/2000-01-08.csv
data/2000-01-09.csv
data/2000-01-10.csv
In [6]:
!head data/2000-01-01.csv
timestamp,id,name,x,y
2000-01-01 00:00:00,1007,Alice,0.13273968247244894,-0.3591496304938018
2000-01-01 00:00:01,1019,Ingrid,-0.7211667658962952,0.2155833582807316
2000-01-01 00:00:02,1031,Laura,-0.8483624532309562,0.5310943164721875
2000-01-01 00:00:03,1002,Norbert,0.9331389807942909,-0.16486881516840524
2000-01-01 00:00:04,1007,Patricia,0.021305652105759743,-0.7485580429393366
2000-01-01 00:00:05,978,Bob,0.37621395053139883,0.13591382131033947
2000-01-01 00:00:06,990,Michael,-0.14773839388319754,0.5917850291201492
2000-01-01 00:00:07,957,Yvonne,0.47067965454372684,-0.11887037102759024
2000-01-01 00:00:08,1029,Wendy,0.9886915809624768,-0.08455511231906487
In [7]:
!head data/2000-01-30.csv
timestamp,id,name,x,y
2000-01-30 00:00:00,1038,Norbert,-0.0972825061551259,0.047001564478093893
2000-01-30 00:00:01,989,Zelda,0.8952342172683243,0.8398718057691255
2000-01-30 00:00:02,1000,Xavier,-0.7385854166447912,0.3333609469886645
2000-01-30 00:00:03,998,Kevin,-0.21613818836346232,0.7998954531843563
2000-01-30 00:00:04,968,Kevin,0.3020353987336595,0.3323407119013779
2000-01-30 00:00:05,945,Alice,0.5354632579149492,-0.8185093914324069
2000-01-30 00:00:06,988,Hannah,0.32565849954482506,-0.8614212408573039
2000-01-30 00:00:07,986,Sarah,-0.5452498652120161,0.35720031457057777
2000-01-30 00:00:08,1014,Quinn,0.8952205211281721,-0.6288167583068145

We can read one file with pandas.read_csv, or many files at once with dask.dataframe.read_csv.

In [8]:
import pandas as pd

df = pd.read_csv('data/2000-01-01.csv')
df.head()
Out[8]:
             timestamp    id      name         x         y
0  2000-01-01 00:00:00  1007     Alice  0.132740 -0.359150
1  2000-01-01 00:00:01  1019    Ingrid -0.721167  0.215583
2  2000-01-01 00:00:02  1031     Laura -0.848362  0.531094
3  2000-01-01 00:00:03  1002   Norbert  0.933139 -0.164869
4  2000-01-01 00:00:04  1007  Patricia  0.021306 -0.748558
In [9]:
import dask.dataframe as dd

df = dd.read_csv('data/2000-*-*.csv')
df
Out[9]:
Dask DataFrame Structure:
               timestamp     id    name        x        y
npartitions=30
                  object  int64  object  float64  float64
                     ...    ...     ...      ...      ...
...                  ...    ...     ...      ...      ...
                     ...    ...     ...      ...      ...
                     ...    ...     ...      ...      ...
Dask Name: from-delayed, 90 tasks
In [10]:
df.head()
Out[10]:
             timestamp    id      name         x         y
0  2000-01-01 00:00:00  1007     Alice  0.132740 -0.359150
1  2000-01-01 00:00:01  1019    Ingrid -0.721167  0.215583
2  2000-01-01 00:00:02  1031     Laura -0.848362  0.531094
3  2000-01-01 00:00:03  1002   Norbert  0.933139 -0.164869
4  2000-01-01 00:00:04  1007  Patricia  0.021306 -0.748558

Tuning read_csv

The Pandas read_csv function has many options to help you parse files. The Dask version uses the Pandas function internally, and so supports many of the same options. You can use the ? operator to see the full documentation string.

In [11]:
pd.read_csv?
In [12]:
dd.read_csv?
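
Most of these pandas keyword arguments pass straight through dd.read_csv, and Dask adds a few options of its own. As a rough sketch (assuming the CSV files created above; not an executed cell from the original notebook):

In [ ]:
# A sketch, not run above: dtype and usecols are forwarded to pandas.read_csv,
# while blocksize is a Dask-specific option controlling partition size.
df = dd.read_csv(
    'data/2000-*-*.csv',
    dtype={'id': 'int64'},             # pandas option: fix the dtype up front
    usecols=['timestamp', 'id', 'x'],  # pandas option: only parse these columns
    blocksize='16MB',                  # Dask option: aim for ~16 MB per partition
)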

In this case we use the parse_dates keyword to parse the timestamp column as datetimes. This will make later computations more efficient. Notice that the dtype of the timestamp column has changed from object to datetime64[ns].

In [13]:
df = dd.read_csv('data/2000-*-*.csv', parse_dates=['timestamp'])
df
Out[13]:
Dask DataFrame Structure:
                     timestamp     id    name        x        y
npartitions=30
                datetime64[ns]  int64  object  float64  float64
                           ...    ...     ...      ...      ...
...                        ...    ...     ...      ...      ...
                           ...    ...     ...      ...      ...
                           ...    ...     ...      ...      ...
Dask Name: from-delayed, 90 tasks

Do a simple computation

Whenever we operate on our dataframe, Dask reads through all of our CSV data from disk rather than keeping it all in memory. This is very efficient for memory use, but reading and parsing all of the CSV files every time can be slow.

In [14]:
%time df.groupby('name').x.mean().compute()
CPU times: user 6.41 s, sys: 672 ms, total: 7.08 s
Wall time: 5.98 s
Out[14]:
name
Alice       0.002437
Bob         0.000657
Charlie     0.000605
Dan        -0.000003
Edith      -0.001029
Frank       0.000309
George      0.001402
Hannah      0.000036
Ingrid     -0.001477
Jerry       0.000150
Kevin       0.000595
Laura       0.001253
Michael    -0.000660
Norbert     0.001247
Oliver     -0.003023
Patricia   -0.000968
Quinn       0.002016
Ray        -0.001267
Sarah       0.001090
Tim        -0.001538
Ursula     -0.000649
Victor      0.000048
Wendy      -0.001192
Xavier      0.001511
Yvonne     -0.001698
Zelda      -0.004163
Name: x, dtype: float64

Write to Parquet

Instead, we'll store our data in Parquet, a format that is more efficient for computers to read and write.

In [15]:
df.to_parquet('data/2000-01.parquet', engine='pyarrow')
In [16]:
!ls data/2000-01.parquet/
_common_metadata  part.16.parquet  part.23.parquet  part.3.parquet
part.0.parquet	  part.17.parquet  part.24.parquet  part.4.parquet
part.10.parquet   part.18.parquet  part.25.parquet  part.5.parquet
part.11.parquet   part.19.parquet  part.26.parquet  part.6.parquet
part.12.parquet   part.1.parquet   part.27.parquet  part.7.parquet
part.13.parquet   part.20.parquet  part.28.parquet  part.8.parquet
part.14.parquet   part.21.parquet  part.29.parquet  part.9.parquet
part.15.parquet   part.22.parquet  part.2.parquet
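
The to_parquet call also accepts a number of options. A hedged sketch of two commonly used ones (not executed here; the output path is hypothetical):

In [ ]:
# A sketch, not run above: common to_parquet options.
df.to_parquet(
    'data/2000-01-snappy.parquet',  # hypothetical output path for illustration
    engine='pyarrow',
    compression='snappy',           # compress the column chunks on disk
    write_index=True,               # also store the dataframe index
)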

Read from Parquet

In [17]:
df = dd.read_parquet('data/2000-01.parquet', engine='pyarrow')
df
Out[17]:
Dask DataFrame Structure:
                     timestamp     id    name        x        y
npartitions=30
                datetime64[ns]  int64  object  float64  float64
                           ...    ...     ...      ...      ...
...                        ...    ...     ...      ...      ...
                           ...    ...     ...      ...      ...
                           ...    ...     ...      ...      ...
Dask Name: read-parquet, 30 tasks
In [18]:
%time df.groupby('name').x.mean().compute()
CPU times: user 1.4 s, sys: 232 ms, total: 1.64 s
Wall time: 1.5 s
Out[18]:
name
Alice       0.002437
Bob         0.000657
Charlie     0.000605
Dan        -0.000003
Edith      -0.001029
Frank       0.000309
George      0.001402
Hannah      0.000036
Ingrid     -0.001477
Jerry       0.000150
Kevin       0.000595
Laura       0.001253
Michael    -0.000660
Norbert     0.001247
Oliver     -0.003023
Patricia   -0.000968
Quinn       0.002016
Ray        -0.001267
Sarah       0.001090
Tim        -0.001538
Ursula     -0.000649
Victor      0.000048
Wendy      -0.001192
Xavier      0.001511
Yvonne     -0.001698
Zelda      -0.004163
Name: x, dtype: float64

Select only the columns that you plan to use

Parquet is a column-store, which means that it can efficiently pull out only a few columns from your dataset. This is good because it avoids loading data that you don't need.

In [19]:
%%time
df = dd.read_parquet('data/2000-01.parquet', columns=['name', 'x'], engine='pyarrow')
df.groupby('name').x.mean().compute()
CPU times: user 1.32 s, sys: 132 ms, total: 1.46 s
Wall time: 1.32 s

Here the difference is not that large, but with larger datasets this can save a great deal of time.
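
Row filtering can be pushed down in a similar spirit. A tentative sketch using the filters argument (not demonstrated in the timings above; depending on the Dask and pyarrow versions this may only skip whole row groups, so an explicit boolean mask can still be needed for exact filtering):

In [ ]:
# A sketch, not run above: filters let the Parquet reader skip data whose
# column statistics show it cannot match the predicate.
df = dd.read_parquet(
    'data/2000-01.parquet',
    engine='pyarrow',
    columns=['name', 'x'],
    filters=[('name', '==', 'Alice')],  # keep only data that can contain Alice
)
df.x.mean().compute()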

