The Queen's University of Belfast

Parallel Computer Centre
9 Further Development
Fortran 95 will be a fairly small update to Fortran 90, consisting mainly of clarifications and corrections to the existing standard. The next major changes are expected in Fortran 2000.
Fortran 95 will, however, provide some new features, including:
- FORALL statement and construct
FORALL (i=1:n) a(i,i)=i
FORALL (i=1:n, j=1:n, y(i,j)/=0 .AND. i/=j) x(i,j)=1.0/y(i,j)
FORALL (i=1:n)
   a(i,i)=i
   b(i)=i*i
END FORALL
- PURE attribute
PURE procedures are free of side effects and are therefore safe for use in FORALL statements and constructs.
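For example, a small PURE function (the name inv_or_zero is illustrative, not part of the standard) that could be referenced inside a FORALL:
PURE REAL FUNCTION inv_or_zero(y)
   REAL, INTENT(IN) :: y   ! PURE functions must give their dummy arguments INTENT(IN)
   IF (y /= 0.0) THEN
      inv_or_zero = 1.0/y
   ELSE
      inv_or_zero = 0.0
   END IF
END FUNCTION inv_or_zero

FORALL (i=1:n, j=1:n) x(i,j) = inv_or_zero(y(i,j))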
- CPU_TIME intrinsic subroutine for timing sections of code
CALL CPU_TIME(t1)
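The usual pattern is to call CPU_TIME before and after the section of interest and difference the two values; a minimal sketch:
REAL :: t1, t2
CALL CPU_TIME(t1)
! ... section of code to be timed ...
CALL CPU_TIME(t2)
PRINT *, 'CPU time used: ', t2-t1, ' seconds'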
- Allocatable dummy arguments and results
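A sketch of what an allocatable function result would look like, assuming the feature as proposed (the function name squares is illustrative):
FUNCTION squares(n) RESULT(r)
   INTEGER, INTENT(IN) :: n
   REAL, ALLOCATABLE :: r(:)   ! the result array is allocated to the required size inside the function
   INTEGER :: i
   ALLOCATE(r(n))
   DO i = 1, n
      r(i) = REAL(i*i)
   END DO
END FUNCTION squares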
- Nested WHERE
WHERE (mask1)
   ...
   WHERE (mask2)
      ...
   ELSEWHERE
      ...
   ENDWHERE
ELSEWHERE
   ...
ENDWHERE
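A concrete sketch of the same shape, assuming conformable REAL arrays x and y:
WHERE (y /= 0.0)
   WHERE (ABS(y) > 1.0)
      x = x/y
   ELSEWHERE
      x = x*y
   ENDWHERE
ELSEWHERE
   x = 0.0
ENDWHERE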
- Object Initialisation
Pointers, including derived-type components, may be given a default initial status, e.g. disassociated:
REAL, POINTER :: P(:) => NULL()
TYPE string
   CHARACTER, POINTER :: ch(:) => NULL()
ENDTYPE
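Since P starts in the disassociated state, its status can be tested reliably before the first allocation, for example:
IF (.NOT. ASSOCIATED(P)) ALLOCATE(P(100))   ! safe: P is known to be disassociated, not undefined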
It is important nowadays that a new programming language standard should permit efficient compilation and execution of code on supercomputers as well as on conventional computers. Fortran 90 is considered efficient on conventional computers and on vector processors, but less so on parallel computers. However, the focus of supercomputing research has recently shifted away from vector computers and towards parallel and massively parallel computers. This interest in parallel computers has led to the development of two de facto standards:
- High Performance Fortran (HPF).
- Message Passing Interface (MPI).
The goal of High Performance Fortran was to define a set of language extensions to Fortran 90 that would:
- Support data parallel programming.
- Deliver top performance on MIMD and SIMD computers with non-uniform memory access.
- Allow code tuning for various architectures.
- Deviate minimally from other standards.
- Define open interfaces to other languages.
- Encourage input from the high performance computing community.
Fortran 90 supports data parallel programming through its array operations and intrinsics. HPF extends this support with:
- Compiler directives for data alignment and distribution (a short sketch follows this list).
- Concurrent execution features using the FORALL statement.
- A number of intrinsic functions to enquire about machine-specific details.
- A number of extrinsic functions which provide an escape mechanism from HPF.
- A library of routines to support global operations.
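A minimal sketch of the directive style (the array and processor names are illustrative): a is distributed blockwise over four abstract processors and b is aligned with it, so corresponding elements live on the same processor:
REAL :: a(1000), b(1000)
INTEGER :: i
!HPF$ PROCESSORS p(4)
!HPF$ DISTRIBUTE a(BLOCK) ONTO p
!HPF$ ALIGN b(i) WITH a(i)
FORALL (i=1:1000) b(i) = 2.0*a(i)   ! no communication needed: b(i) and a(i) are co-located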
MPI is a proposed standard Message Passing Interface for:
- Explicit message passing.
- Application programs.
- MIMD distributed memory concurrent computers.
- Workstation networks.
Such a standard is required for several reasons:
- Portability and ease of use.
- The time is right for a standard.
- It simplifies the construction of libraries.
- It is a prerequisite for the development of a concurrent software industry.
- It provides hardware vendors with a well-defined set of routines that they must implement efficiently.
MPI contains the following (a minimal program sketch follows this list):
- Point-to-point message passing.
- Blocking and non-blocking sends in four modes (standard, buffered, synchronous and ready), and blocking and non-blocking receives.
- Generalised buffer descriptions (datatype and process identifier) to support heterogeneity.
- Collective communication routines.
- Data movement (one-all and all-all versions of the broadcast, scatter, and gather routines).
- Global computation (reduce and scan routines).
- Support for process groups and communication contexts.
- Communicators combine context and group for message security and thread safety.
- Support for application topologies (grids and graphs).
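As a minimal sketch of these facilities in use (Fortran bindings; error-code arguments are collected but not checked; run with at least two processes), process 0 sends a standard-mode message to process 1, and a reduction then sums a contribution from every process onto process 0:
PROGRAM mpi_sketch
   INCLUDE 'mpif.h'
   INTEGER :: rank, nprocs, ierr
   INTEGER :: status(MPI_STATUS_SIZE)
   INTEGER :: token, part, total

   CALL MPI_INIT(ierr)
   CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
   CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

   ! Point-to-point: standard-mode blocking send from process 0 to process 1
   IF (rank == 0) THEN
      token = 42
      CALL MPI_SEND(token, 1, MPI_INTEGER, 1, 0, MPI_COMM_WORLD, ierr)
   ELSE IF (rank == 1) THEN
      CALL MPI_RECV(token, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, status, ierr)
   END IF

   ! Collective: global sum of every process's contribution onto process 0
   part = rank + 1
   CALL MPI_REDUCE(part, total, 1, MPI_INTEGER, MPI_SUM, 0, MPI_COMM_WORLD, ierr)

   CALL MPI_FINALIZE(ierr)
END PROGRAM mpi_sketch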