Dask Release 0.18.0

This work is supported by Anaconda Inc.

I’m pleased to announce the release of Dask version 0.18.0. This is a major release with breaking changes and new features. The last release was 0.17.5 on May 4th. This blogpost outlines notable changes since the last release blogpost for 0.17.2 on March 21st.

You can conda install Dask:

conda install dask

or pip install from PyPI:

pip install dask[complete] --upgrade

Full changelogs are available in the dask and distributed repositories.

We list some breaking changes below, followed by changes that are less important but still fun.

Notable Breaking changes

Centralized configuration

Taking full advantage of Dask sometimes requires user configuration. This might be to control logging verbosity, specify cluster configuration, provide credentials for security, or any of several other options that arise in production.

We’ve found that different computing cultures like to specify configuration in several different ways:

  1. Configuration files
  2. Environment variables
  3. Directly within Python code

Previously this was handled with a variety of different solutions among the different dask-foo subprojects. Dask.distributed had one system, dask-kubernetes had another, and so on.

Now we centralize configuration in the dask.config module, which collects configuration from config files, environment variables, and runtime code, and makes it centrally available to all Dask subprojects. A number of Dask subprojects (dask.distributed, dask-kubernetes, and dask-jobqueue), are being co-released at the same time to take advantage of this.
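
As a rough sketch of how the unified system behaves (allowed-failures is a real key from the dask.distributed defaults shown below; the value here is illustrative), the same nested option can come from a YAML file, an environment variable, or Python code:

import dask

# The same option, three ways:
#   1. in a YAML file:             distributed: {scheduler: {allowed-failures: 5}}
#   2. as an environment variable: DASK_DISTRIBUTED__SCHEDULER__ALLOWED_FAILURES=5
#      (double underscores mark nesting)
#   3. at runtime in Python:
dask.config.set({'distributed.scheduler.allowed-failures': 5})

# However it was supplied, every subproject reads it the same way
dask.config.get('distributed.scheduler.allowed-failures')  # -> 5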

If you were actively using Dask.distributed’s configuration files, some things have changed:

  1. The configuration is now namespaced and more heavily nested. Here is an example from the dask.distributed default config file today:

    distributed:
      version: 2
      scheduler:
        allowed-failures: 3   # number of retries before a task is considered bad
        bandwidth: 100000000  # 100 MB/s estimated worker-worker bandwidth
        work-stealing: True   # workers should steal tasks from each other
        worker-ttl: null      # like '60s'. Workers must heartbeat faster than this
      worker:
        multiprocessing-method: forkserver
        use-file-locking: True
  2. The default configuration location has moved from ~/.dask/config.yaml to ~/.config/dask/distributed.yaml, where it will live alongside several other files like kubernetes.yaml, jobqueue.yaml, and so on.

However, your old configuration files will still be found and their values will be used appropriately. We don’t make any attempt to migrate your old config values to the new location though. You may want to delete the auto-generated ~/.dask/config.yaml file at some point, if you feel like being particularly tidy.

You can learn more in Dask’s configuration documentation.

Replaced the common get= keyword with scheduler=

Dask can execute code with a variety of scheduler backends based on threads, processes, single-threaded execution, or distributed clusters.

Previously, users selected between these backends using the somewhat generically named get= keyword:

x.compute(get=dask.threaded.get)
x.compute(get=dask.multiprocessing.get)
x.compute(get=dask.local.get_sync)

We’ve replaced this with a new, and hopefully clearer, scheduler= keyword:

x.compute(scheduler='threads')
x.compute(scheduler='processes')
x.compute(scheduler='single-threaded')

The get= keyword has been deprecated and will raise a warning. It will be removed entirely in the next major release.

For more information, see the documentation on selecting different schedulers.

Replaced dask.set_options with dask.config.set

Related to the configuration changes, we now include runtime state in the configuration. Previously people used to set runtime state with the dask.set_options context manager. Now we recommend using dask.config.set:

with dask.set_options(scheduler='threads'):  # Before
    ...

with dask.config.set(scheduler='threads'):   # After
    ...

The dask.set_options function is now an alias to dask.config.set.

Removed the dask.array.learn subpackage

This was unadvertised and saw very little use. All functionality (and much more) is now available in Dask-ML.

Other

  • We’ve removed the token= keyword from map_blocks and moved the functionality to the name= keyword.
  • The dask.distributed.worker_client automatically rejoins the threadpool when you close the context manager.
  • The Dask.distributed protocol now interprets msgpack arrays as tuples rather than lists. As a reminder, using different versions of Dask.distributed workers/scheduler/clients is not supported.

Fun changes

Arrays

Generalized Universal Functions


Dask.array now supports Numpy-style Generalized Universal Functions (gufuncs) transparently. As a reminder, Numpy’s gufuncs specify the axes along which they broadcast and the core axes that they need fully in memory. This means that you can apply a broader set of Numpy functions to Dask arrays and have everything just work.

Here are a few examples:

  1. Apply a normal Numpy GUFunc, eig, directly onto a Dask array

    import dask.array as da
    import numpy as np

    # Apply a Numpy GUFunc, eig, directly onto a Dask array
    x = da.random.normal(size=(3, 10, 10), chunks=(2, 10, 10))
    w, v = np.linalg._umath_linalg.eig(x, output_dtypes=(float, float))

    Numpy has gufunc implementations of many of its internal functions, but it hasn’t yet exposed them in its public API.

  2. We can define our own gufunc from a Python function

    @da.as_gufunc(signature="(i)->()", output_dtypes=float, vectorize=True)
    def gufoo(x):
        return np.mean(x, axis=-1)

    y = gufoo(x)

    Here we use just np.mean, but you can imagine something more complex.

  3. We can also define GUFuncs with other projects, like Numba

    import numba
    from numba import float64

    @numba.vectorize([float64(float64, float64)])
    def f(x, y):
        return x + y

    f(x, y)

    What I like about this is that the Dask and Numba developers didn’t coordinate on this feature at all; they both support the Numpy GUFunc protocol, so you get interactions like this for free.

However, there are several limitations to Dask’s use of gufuncs: inner or core dimensions must fit within a single chunk. This is a good opportunity to use the da.rechunk method.
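
For example, a minimal sketch reusing the gufoo function defined above, whose core dimension is the last axis:

x = da.random.normal(size=(1000, 1000), chunks=(100, 100))
x = x.rechunk({1: x.shape[1]})  # core dimensions must fit in a single chunk
y = gufoo(x)                    # the gufunc now applies cleanly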

For more information, see Dask’s GUFunc documentation. This work was done by Markus Gonser (@magonser).

New “auto” value for rechunking

How you lay out your arrays into chunks can greatly affect the performance of many algorithms. Expert users often explicitly rechunk their arrays before running certain algorithms like FFTs, matrix multiplies, covariance computations, etc. For example, they might want to rechunk an array so that the 0th dimension has a single chunk and the 1st dimension expands or contracts as necessary to maintain a good chunk size. Previously they had to do a bit of math to determine what size to make the chunks in that 1st dimension. Now they can write something like this:

with dask.config.set({'array.chunk-size': '100MiB'}):
    x = x.rechunk({
        0: x.shape[0],  # single chunk in this dimension
        # 1: 100e6 / x.dtype.itemsize / x.shape[0],  # before we had to calculate manually
        1: 'auto',      # respond as necessary in this dimension
    })

This is especially valuable for higher-dimensional arrays. The “auto” value can be used anywhere that accepts chunks:

x = da.from_array(x, chunks=(10, 'auto'))

To be clear though, this doesn’t do “automatic chunking”, which is a very hard problem in general. Users still need to be aware of their computations and how they want to chunk; this just makes it marginally easier to make good decisions.

Algorithmic improvements

Dask.array gained a full einsum implementation thanks to Simon Perkins.
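
As a minimal sketch of what this enables (the shapes and chunk sizes here are arbitrary):

import dask.array as da

a = da.random.random((1000, 1000), chunks=(250, 250))
b = da.random.random((1000, 1000), chunks=(250, 250))

# a matrix multiply, written as an einsum contraction
c = da.einsum('ij,jk->ik', a, b)
c.compute()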

Also, thanks to work from Jeremy Chen, Dask.array’s QR decompositions became nicer in two ways:

  1. They support short-and-fat arrays
  2. The tall-and-skinny variant now operates more robustly in less memory.
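
As a rough sketch of both cases (shapes and chunk sizes are illustrative, and this assumes da.linalg.qr dispatches to the appropriate variant based on the chunking):

import dask.array as da

# tall-and-skinny: chunked along the rows, one chunk across the columns
x = da.random.random((100000, 20), chunks=(10000, 20))
q, r = da.linalg.qr(x)

# short-and-fat: chunked along the columns, now supported as well
y = da.random.random((20, 100000), chunks=(20, 10000))
q2, r2 = da.linalg.qr(y)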

Native support for the Zarr format for chunked n-dimensional arrays landed, thanks to Martin Durant and John A Kirkham. Zarr has been especially useful due to its speed, simple spec, support for the full NetCDF-style conventions, and amenability to cloud storage.
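
A minimal sketch of the round trip (the store name is hypothetical):

import dask.array as da

x = da.random.random((10000, 10000), chunks=(1000, 1000))
da.to_zarr(x, 'myfile.zarr')     # write chunks to a Zarr store in parallel
y = da.from_zarr('myfile.zarr')  # lazily read them back as a Dask array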

Dataframes


Pandas 0.23.0 support

Thanks to Tom Augspurger, Dask.dataframe is now consistent with the changes in the recent Pandas 0.23 release.

ORC support

Dask.dataframe has grown a reader for the Apache ORC format.

ORC is a format for tabular data storage that is common in the Hadoop ecosystem. The new dd.read_orc function parallelizes around similarly new ORC functionality within PyArrow. Thanks to Jim Crist for the work on the Arrow side and to Martin Durant for parallelizing it with Dask.
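
A minimal sketch of its use (the path and column names are hypothetical):

import dask.dataframe as dd

df = dd.read_orc('data/*.orc', columns=['timestamp', 'value'])
df.value.sum().compute()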

Read_json support

Dask.dataframe has also grown a reader for JSON files.

The dd.read_json function matches most of the pandas.read_json API.
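
A minimal sketch (the path is hypothetical; orient= and lines= mirror pandas.read_json):

import dask.dataframe as dd

df = dd.read_json('data/*.json', orient='records', lines=True)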

This came about shortly after a PyCon 2018 talk comparing Spark and Dask dataframes, in which Irina Truong mentioned that it was missing. Thanks to Martin Durant and Irina Truong for this contribution.

See the dataframe data ingestion documentation for more information about JSON, ORC, or any of the other formats supported by Dask.dataframe.

Joblib


Joblib, the parallelism library behind most of Scikit-Learn, has had a Dask backend for a while now. While it has always been pretty easy to use, it’s now becoming much easier to use well. After using it in practice for a while together with the Scikit-Learn developers, we’ve identified and smoothed over a number of usability issues:

  • Simpler startup:

    Previously you had to explicitly import distributed.joblib in order to register the Dask backend. This is no longer necessary; Scikit-Learn will find the backend automatically.

  • Using Dask globally:

    Previously you always had to wrap Joblib/Scikit-Learn commands within a context manager:

    from dask.distributed import Client
    client = Client()  # start a local cluster

    from sklearn.externals import joblib

    with joblib.parallel_backend("dask"):
        ...  # run my scikit-learn code here

    This is still the right way to do things when writing serious software, but the context manager was a bit of a pain for interactive use. Now you can set Joblib to use Dask globally, as sketched below.

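    A minimal sketch of the global form (this assumes that calling joblib’s parallel_backend directly, rather than as a context manager, leaves the Dask backend active until it is changed):

    from dask.distributed import Client
    from sklearn.externals import joblib

    client = Client()  # start a local cluster

    # invoked outside a with-block, the backend stays registered globally
    joblib.parallel_backend('dask')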

This release is timed with the following packages:

  1. dask
  2. distributed
  3. dask-kubernetes
  4. dask-jobqueue

There is also a new repository for deploying applications on YARN (a job scheduler common in Hadoop environments) called skein. Early adopters welcome.

Acknowledgements

Since March 21st, the following people have contributed to these repositories:

The core Dask repository for parallel algorithms:

  • Andrethrill
  • Beomi
  • Brendan Martin
  • Christopher Ren
  • Guido Imperiale
  • Diane Trout
  • fjetter
  • Frederick
  • Henry Doupe
  • James Bourbeau
  • Jeremy Chen
  • Jim Crist
  • John A Kirkham
  • Jon Mease
  • Jörg Dietrich
  • Kevin Mader
  • Ksenia Bobrova
  • Larsr
  • Marc Pfister
  • Markus Gonser
  • Martin Durant
  • Matt Lee
  • Matthew Rocklin
  • Pierre-Bartet
  • Scott Sievert
  • Simon Perkins
  • Stefan van der Walt
  • Stephan Hoyer
  • Tom Augspurger
  • Uwe L. Korn
  • Yu Feng

The dask/distributed repository for distributed computing:

  • Bmaisonn
  • Grant Jenks
  • Henry Doupe
  • Irene Rodriguez
  • Irina Truong
  • John A Kirkham
  • Joseph Atkins-Turkish
  • Kenneth Koski
  • Loïc Estève
  • Marius van Niekerk
  • Martin Durant
  • Matthew Rocklin
  • Olivier Grisel
  • Russ Bubley
  • Tom Augspurger
  • Tony Lorenzo

The dask-kubernetes repository for deploying Dask on Kubernetes:

  • Brendan Martin
  • J Gerard
  • Matthew Rocklin
  • Olivier Grisel
  • Yuvi Panda

The dask-jobqueue repository for deploying Dask on HPC job schedulers:

  • Guillaume Eynard-Bontemps
  • jgerardsimcock
  • Joe Hamman
  • Joseph Hamman
  • Loïc Estève
  • Matthew Rocklin
  • Ray Bell
  • Rich Signell
  • Shawn Taylor
  • Spencer Clark

The dask-ml repository for scalable machine learning:

  • Christopher Ren
  • Jeremy Chen
  • Matthew Rocklin
  • Scott Sievert
  • Tom Augspurger
