=======================
HISTORY: MPI for Python
=======================
:Author: Lisandro Dalcin
:Contact: dalcinl@gmail.com
:Web Site: http://mpi4py.googlecode.com/
:Organization: CIMEC <http://www.cimec.org.ar>
:Address: CCT CONICET, 3000 Santa Fe, Argentina

Release 1.3 [2012-01-20]
========================
* Now ``Comm.recv()`` accepts a buffer to receive the message.
* Add ``Comm.irecv()`` and ``Request.{wait|test}[any|all]()``.
* Add ``Intracomm.Spawn_multiple()``.
* Better buffer handling for PEP 3118 and legacy buffer interfaces.
* Add support for attribute caching on communicators,
datatypes and windows.
* Install MPI-enabled Python interpreter as
``<path>/mpi4py/bin/python-mpi``.
* Windows: Support for building with Open MPI.

Release 1.2.2 [2010-09-13]
==========================
* Add ``mpi4py.get_config()`` to retrieve information (compiler
wrappers, includes, libraries, etc) about the MPI implementation
employed to build mpi4py.
* Workaround Python libraries with missing GILState-related API calls
in case of non-threaded Python builds.
* Windows: look for MPICH2, DeinoMPI, Microsoft HPC Pack at their
default install locations under ``%ProgramFiles%``.
* MPE: fix hacks related to old APIs; these hacks broke when MPE was
  built with an MPI implementation other than MPICH2.
* HP-MPI: fix for missing Fortran datatypes; use ``dlopen()`` to load
  the MPI shared library before ``MPI_Init()``.
* Many distutils-related fixes, cleanups, and enhancements, with
  better logic to find MPI compiler wrappers.
* Support for ``pip install mpi4py``.

Release 1.2.1 [2010-02-26]
==========================
* Fix declaration in Cython include file. This declaration, while
valid for Cython, broke the simple-minded parsing used in
conf/mpidistutils.py to implement configure-tests for availability
of MPI symbols.
* Update SWIG support and make it compatible with Python 3. Also
generate a warning for SWIG < 1.3.28.
* Fix distutils-related issues on Mac OS X. Now the ``ARCHFLAGS``
  environment variable is honored over all of Python's
  ``config/Makefile`` variables.
* Fix issues with Open MPI < 1.4.2 related to error checking and
``MPI_XXX_NULL`` handles.

Release 1.2 [2009-12-29]
========================
* Automatic MPI datatype discovery for NumPy arrays and PEP-3118
buffers. Now buffer-like objects can be messaged directly; it is no
longer required to explicitly pass a 2/3-list/tuple like ``[data,
MPI.DOUBLE]``, or ``[data, count, MPI.DOUBLE]``. Only basic types
are supported, i.e., all C/C99-native signed/unsigned integral types
and single/double precision real/complex floating types. Many thanks
to Eilif Muller for the initial feedback.
* Nonblocking send of pickled Python objects. Many thanks to Andreas
Kloeckner for the initial patch and enlightening discussion about
this enhancement.
* ``Request`` instances now hold a reference to the Python object
exposing the buffer involved in point-to-point communication or
parallel I/O. Many thanks to Andreas Kloeckner for the initial
feedback.
* Support for logging of user-defined states and events using `MPE
<http://www.mcs.anl.gov/research/projects/perfvis/>`_. Runtime
(i.e., without requiring a recompile!) activation of logging of all
MPI calls is supported on POSIX platforms implementing ``dlopen()``.
* Support for all the new features in MPI-2.2 (new C99 and F90
datatypes, distributed graph topology, local reduction operation,
and other minor enhancements).
* Fix the annoying issues related to Open MPI and Python dynamic
loading of extension modules in platforms supporting ``dlopen()``.
* Fix SLURM dynamic loading issues on SiCortex. Many thanks to Ian
Langmore for providing me shell access.

Release 1.1.0 [2009-06-06]
==========================
* Fix bug in ``Comm.Iprobe()`` that caused segfaults as Python C-API
calls were issued with the GIL released (issue #2).
* Add ``Comm.bsend()`` and ``Comm.ssend()`` for buffered and
synchronous send semantics when communicating general Python
objects.
* Now the call ``Info.Get(key)`` returns a *single* value (i.e., instead
of a 2-tuple); this value is ``None`` if ``key`` is not in the
``Info`` object, or a string otherwise. Previously, the call
redundantly returned ``(None, False)`` for missing key-value pairs;
``None`` is enough to signal a missing entry.
* Add support for parametrized Fortran datatypes.
* Add support for decoding user-defined datatypes.
* Add support for user-defined reduction operations on memory
buffers. However, at most 16 user-defined reduction operations
can be created. Ask the author for more room if you need it.

Release 1.0.0 [2009-03-20]
==========================
This is the first release of the all-new, Cython-based, implementation
of *MPI for Python*. Unfortunately, this implementation is not
backward-compatible with the previous one. The list below summarizes
the more important changes that can impact user codes.
* Some communication calls had *overloaded* functionality. Now there
is a clear distinction between communication of general Python
objects with *pickle*, and (fast, near C-speed) communication of
buffer-like objects (e.g., NumPy arrays).
- for communicating general Python objects, you have to use
all-lowercase methods, like ``send()``, ``recv()``, ``bcast()``,
etc.
- for communicating array data, you have to use ``Send()``,
``Recv()``, ``Bcast()``, etc. methods. Buffer arguments to these
calls must be explicitly specified by using a 2/3-list/tuple like
``[data, MPI.DOUBLE]``, or ``[data, count, MPI.DOUBLE]`` (the
former one uses the byte-size of ``data`` and the extent of the
MPI datatype to define the ``count``).
* Indexing a communicator with an integer returned a special object
associating the communication with a target rank, relieving you
from having to specify source/destination/root arguments in point-to-point
and collective communications. This functionality is no longer
available; expressions like::

    MPI.COMM_WORLD[0].Send(...)
    MPI.COMM_WORLD[0].Recv(...)
    MPI.COMM_WORLD[0].Bcast(...)

have to be replaced by::

    MPI.COMM_WORLD.Send(..., dest=0)
    MPI.COMM_WORLD.Recv(..., source=0)
    MPI.COMM_WORLD.Bcast(..., root=0)
* Automatic MPI initialization (i.e., at import time) requests the
maximum level of MPI thread support (i.e., it is done by calling
``MPI_Init_thread()`` and passing ``MPI_THREAD_MULTIPLE``). In case
you need to change this behavior, you can tweak the contents of the
``mpi4py.rc`` module.
* In order to obtain the values of predefined attributes attached to
the world communicator, now you have to use the ``Get_attr()``
method on the ``MPI.COMM_WORLD`` instance::

    tag_ub = MPI.COMM_WORLD.Get_attr(MPI.TAG_UB)
* In the previous implementation, ``MPI.COMM_WORLD`` and
``MPI.COMM_SELF`` were associated to **duplicates** of the (C-level)
``MPI_COMM_WORLD`` and ``MPI_COMM_SELF`` predefined communicator
handles. This is no longer the case: ``MPI.COMM_WORLD`` and
``MPI.COMM_SELF`` now proxy the **actual** ``MPI_COMM_WORLD`` and
``MPI_COMM_SELF`` handles.
* Convenience aliases ``MPI.WORLD`` and ``MPI.SELF`` were removed. Use
  ``MPI.COMM_WORLD`` and ``MPI.COMM_SELF`` instead.
* Convenience constants ``MPI.WORLD_SIZE`` and ``MPI.WORLD_RANK`` were
  removed. Use ``MPI.COMM_WORLD.Get_size()`` and
  ``MPI.COMM_WORLD.Get_rank()`` instead.