OpenMP - Frequently Asked Questions (FAQ)

What is OpenMP?

OpenMP is a specification for a set of compiler directives, library routines, and environment variables that can be used to specify shared memory parallelism in Fortran and C/C++ programs.
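As a minimal C sketch (an illustration, not text from the specification), the program below uses one of each component: a compiler directive, two library routines, and it is controlled by an environment variable.

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        /* Compiler directive: run the enclosed block on a team of threads. */
        #pragma omp parallel
        {
            /* Library routines: query this thread's id and the team size. */
            printf("hello from thread %d of %d\n",
                   omp_get_thread_num(), omp_get_num_threads());
        }
        return 0;
    }

With an OpenMP-aware compiler, the environment variable OMP_NUM_THREADS (for example, set to 4 before running) determines how many threads the parallel region uses.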

Why a new standard?

Shared-memory parallel programming directives have never been standardized in the industry. An earlier standardization effort, ANSI X3H5, was never formally adopted. So each vendor has provided its own set of directives, very similar to the others' in syntax and semantics, but each with a unique comment notation for "portability". OpenMP consolidates these directive sets into a single syntax and semantics, and finally delivers the long-awaited promise of single-source portability for shared-memory parallelism.
OpenMP also addresses the inability of earlier shared-memory directive sets to deal with coarse-grain parallelism. Their limited support for coarse-grain work led developers to think that shared-memory parallel programming was inherently limited to fine-grain parallelism -- this isn't the case with OpenMP. OpenMP goes beyond standardizing fine-grain parallelism by introducing functionality useful for writing coarse-grain parallel applications.

SGI and Cray already have parallel directives; why new ones?

The SGI MIPSpro Power compiler directives (DOACROSS) and Cray Autotasking directives (DOALL) provide much of the same functionality as OpenMP. OpenMP provided an opportunity for the two companies to revisit the requirements for parallel programming, review the strengths and weaknesses of these earlier sets of directives, and design a more refined parallel model. OpenMP goes beyond these earlier models with powerful new features like orphaning, necessary for coarse grain parallel applications.
SGI encourages you to move to OpenMP as soon as practical. The additional functionality and the promise of industry-wide portability make OpenMP the model of choice. But the older models will continue to be supported for some period of time, allowing you to make the move to OpenMP when it best suits your schedule and requirements.

How does OpenMP compare with...

What about nested parallelism?

Nested parallelism is permitted by the OpenMP specification. Supporting nested parallelism effectively can be difficult, and we expect most vendors (including SGI) will start out by executing nested parallel constructs on a single thread. At the same time, vendors will be experimenting with nested parallelism to better understand how to approach this topic. As that experience accumulates, we expect the OpenMP group to add fuller support to the specification.
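A hedged C sketch of what the specification permits: the inner parallel construct below is legal, but whether it gets its own team or is serialized on the encountering thread is up to the implementation.

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        omp_set_nested(1);   /* request nested parallelism; a hint only */

        #pragma omp parallel
        {
            int outer = omp_get_thread_num();

            /* An implementation may run this inner region on a fresh
               team, or serialize it on a single thread. */
            #pragma omp parallel
            {
                printf("outer thread %d: inner team size %d\n",
                       outer, omp_get_num_threads());
            }
        }
        return 0;
    }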

What about task parallelism?

Full support for task parallelism is not included in the OpenMP specification. This may be added in future extensions.

What if I just want loop-level parallelism?

OpenMP fully supports loop-level parallelism. This style is useful for applications that have plenty of parallelism in their loops, especially those that will never run on large numbers of processors or for which restructuring the source code is impractical or disallowed. Typically, though, the amount of loop-level parallelism in an application is limited, and this in turn limits the application's scalability.
OpenMP allows you to use loop-level parallelism as a way to start scaling your application across multiple processors, and then move to coarser-grain parallelism while preserving the value of your earlier investment. This incremental development strategy avoids the all-or-nothing risks involved in moving to message passing or other parallel programming models.
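A minimal sketch of that loop-level starting point, assuming a C compiler with OpenMP support: a single directive parallelizes an existing loop, and the rest of the program is untouched.

    #include <stdio.h>
    #include <stdlib.h>

    #define N 1000000

    int main(void)
    {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        int i;

        for (i = 0; i < N; i++)
            b[i] = (double)i;

        /* Loop-level parallelism: the directive distributes the
           iterations across the available threads; the loop variable
           is automatically private to each thread. */
        #pragma omp parallel for
        for (i = 0; i < N; i++)
            a[i] = 2.0 * b[i];

        printf("a[N-1] = %g\n", a[N - 1]);
        free(a);
        free(b);
        return 0;
    }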

What does orphaning mean?

In early shared-memory models, parallel directives were only permitted within the lexical extent of parallel regions. What this means to non-compiler writers is that all information needed to parallelize a loop or subroutine had to be specified within the source for that loop or subroutine. If another subroutine was called, parallel information specific to that subroutine had to be specified at the call site, or the called subroutine had to be (manually) inlined. This simplified the definition and implementation of the directives, and was sufficient for the moderate, loop-level parallelism that dominated the use of these models. But this approach made the source code more difficult to maintain, especially for highly scalable applications.
Orphaning allows parallel directives to be specified outside the lexical extent of a parallel region. A subroutine can now be written for use from a number of parallel regions, with the parallel directives it needs embedded in its own source instead of replicated at every call site. This is a much more natural place to provide this input to the compiler, and it avoids the programming errors that result when the earlier style is used for complex applications. Orphaning is crucial to implementing coarse-grain parallel algorithms, and to the development of portable parallel libraries.
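A small C sketch of orphaning (an illustration, not code from the specification): the worksharing directive lives inside the subroutine, while the parallel region it binds to is in the caller.

    #include <stdio.h>
    #include <omp.h>

    #define N 8

    /* An "orphaned" directive: this omp for is not lexically inside
       any parallel region.  It binds to whatever parallel region is
       active when scale() is called, so the routine can be reused
       from many parallel regions (or called serially, in which case
       the loop simply runs on one thread). */
    void scale(double *a, int n, double s)
    {
        int i;
        #pragma omp for
        for (i = 0; i < n; i++)
            a[i] *= s;
    }

    int main(void)
    {
        double a[N];
        int i;

        for (i = 0; i < N; i++)
            a[i] = (double)i;

        /* The parallel region is here; the worksharing is in scale(). */
        #pragma omp parallel
        scale(a, N, 2.0);

        for (i = 0; i < N; i++)
            printf("%g ", a[i]);
        printf("\n");
        return 0;
    }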

What languages does OpenMP work with?

OpenMP specifications are now available for the Fortran, C, and C++ languages. The OpenMP specification does not include any constructs that require specific Fortran 90 or C++ features.

Which vendors are supporting OpenMP?

SGI led the effort to define the OpenMP API, working with a number of other vendors. For a current list of vendors, see the OpenMP web site.

What role did SGI play in the development of OpenMP?

SGI initiated and led the initial development of OpenMP. The effort began as an internal project in early 1997 when SGI and Cray engineers started looking for ways to converge the two companies' parallel directives, and then decided to include other companies in order to make the new interfaces an industry-wide standard. SGI has continued to maintain its leadership as the OpenMP effort has moved forward, and will remain heavily involved in the future.

What kind of performance can I expect?

OpenMP is an explicit programming model, in that you have full control over what gets parallelized and how data is referenced. So your performance will be determined by your algorithm, code structure, and the underlying system performance, and not by decisions made by the compiler.
For SGI IRIX and UNICOS systems, OpenMP delivers performance comparable to that of the MIPSpro and Autotasking directives currently available on those systems. Because of the similarities between OpenMP and these directives, the existing back-end compiler and library technologies were used to implement OpenMP, and it is this technology that determines the ultimate performance of the model.
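As one hedged illustration of that explicit control in C: the schedule clause below is the programmer's decision, not the compiler's, and a poor choice (for example, static scheduling of uneven work) shows up directly in performance. The work() routine is a hypothetical stand-in for iterations of varying cost.

    #include <stdio.h>

    #define N 1024

    /* Hypothetical stand-in for work whose cost varies per iteration. */
    static double work(int i)
    {
        double s = 0.0;
        int j;
        for (j = 0; j < (i % 64) * 1000; j++)
            s += 1.0 / (j + 1);
        return s;
    }

    int main(void)
    {
        static double r[N];
        int i;

        /* schedule(dynamic,16): hand out chunks of 16 iterations as
           threads become free, balancing the uneven per-iteration
           cost.  This choice is explicit, made by the programmer. */
        #pragma omp parallel for schedule(dynamic, 16)
        for (i = 0; i < N; i++)
            r[i] = work(i);

        printf("r[0] = %g\n", r[0]);
        return 0;
    }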

What about NUMA (Non-Uniform Memory Access) issues?

The current definition of OpenMP does not include any directives or features specifically tied to NUMA architectures. It is expected that for systems where this is important, vendors (including SGI) will supply extensions to OpenMP.
SGI Origin systems have a NUMA architecture, specifically referred to as S2MP. The current MIPSpro Power parallel directives include several that support data distribution on Origin systems. The implementation of OpenMP in the MIPSpro compilers is fully interoperable with these data distribution directives, so an application developer can exploit the portability of OpenMP for expressing parallelism while using the data distribution directives for NUMA-specific optimizations on the Origin.

Is OpenMP scalable?

OpenMP can deliver scalability for applications using shared-memory parallel programming. Significant effort was spent to ensure that OpenMP can be used for scalable applications.
In the past, many programmers using shared-memory directives were limited to loop-level parallelism, a form of fine-grain parallelism, and OpenMP provides good support for this style. But much of the parallelism inherent in an application may not be expressible simply as loop iterations, so loop-level parallelism by itself is inherently non-scalable for most real applications.
OpenMP goes beyond earlier shared-memory directives in that it also supports more scalable, coarse-grain parallelism. OpenMP introduces "orphaned" directives, which allow more complex code structures and the use of parallelized libraries within an application. OpenMP also includes directives that let you divide work among threads, as sketched below.
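A coarse-grain C sketch, under the same assumptions as the earlier examples: a single parallel region encloses arbitrary code, with each thread computing its own block of the index space rather than relying on a per-loop directive.

    #include <stdio.h>
    #include <omp.h>

    #define N 100

    int main(void)
    {
        double a[N];

        /* Coarse-grain style: each thread derives its own block of
           the index space from its id; the region can enclose
           arbitrary code, not just a single loop. */
        #pragma omp parallel
        {
            int nthr = omp_get_num_threads();
            int me   = omp_get_thread_num();
            int lo   = (N * me) / nthr;
            int hi   = (N * (me + 1)) / nthr;
            int i;

            for (i = lo; i < hi; i++)
                a[i] = (double)i * i;
        }

        printf("a[N-1] = %g\n", a[N - 1]);
        return 0;
    }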

What about non-shared memory machines like the CRAY T3E or networks of workstations?

As much as it would be nice to think that a single programming model (OpenMP or MPI or HPF or whatever) might run well on all architectures, this is not the case today.
OpenMP was designed to exploit certain characteristics of shared-memory architectures. The ability to directly access memory throughout the system (with minimal latency and no explicit address mapping), combined with very fast shared-memory locks, makes shared-memory architectures best suited for supporting OpenMP.
Systems that don't fit the classic shared-memory architecture may provide hardware or software layers that present the appearance of a shared-memory system, but often at the cost of higher latencies or special limitations. For example, OpenMP could be implemented for a distributed-memory system on top of MPI, in which case OpenMP's latencies would be greater than those of MPI (whereas on a shared-memory system the reverse is typically the case).
The extent to which these latencies or limitations reduce application portability or performance will help dictate whether vendors choose to develop OpenMP implementations for distributed memory systems. SGI currently has no plans to implement OpenMP for the CRAY T3E or for running applications across systems in a cluster.

Why will OpenMP succeed when PCF and X3H5 failed?

There are a variety of reasons to expect OpenMP to succeed as a standard where these earlier efforts failed.

What about support from debugging and performance tools?

On SGI systems, OpenMP is supported by the standard development tools.
Other system vendors and third-party developers offer tools that support OpenMP; the OpenMP web site includes information on these.

Can I intermix <programming model> with OpenMP?

The ability to intermix OpenMP with other parallel programming models is not dictated by the OpenMP specification, but instead will be defined by the specific implementations. We expect that most implementations will permit mixing of certain models with OpenMP, and this will be driven by a combination of technical feasibility and customer needs.
SGI currently supports a number of such combinations.
With so many models available, the number of potential combinations is daunting. In practice, few applications really need to intermix different models. Since OpenMP supports the full range of fine- through coarse-grain parallelism, in many cases developers will find it meets all their needs.
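One commonly requested combination is MPI across processes with OpenMP within each process. The C sketch below assumes an implementation that permits this mix; it is an illustration, not a statement of what any particular implementation supports.

    #include <stdio.h>
    #include <mpi.h>

    #define N 1000000

    int main(int argc, char **argv)
    {
        static double a[N];
        double local = 0.0, total = 0.0;
        int rank, i;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (i = 0; i < N; i++)
            a[i] = 1.0;

        /* OpenMP parallelism within each MPI process. */
        #pragma omp parallel for reduction(+:local)
        for (i = 0; i < N; i++)
            local += a[i];

        /* MPI parallelism across processes: combine partial sums. */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("total = %g\n", total);

        MPI_Finalize();
        return 0;
    }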

What kind of application code would you recommend use OpenMP?

Any parallel application being developed to run on modern multiprocessor systems, where good performance is important.
Many applications that run on SMP systems today use vendor-specific directives and, without a standard, carry multiple sequences of similar directives scattered throughout the code (one set for SGI, one for Cray, one for HP, and so on). OpenMP finally offers the opportunity to replace these vendor-specific sets with a single, portable set of directives.

How will OpenMP be managed as a standard, over the long-term? Who owns it?

Planning is underway to establish an OpenMP Architecture Review Board (ARB) to manage the OpenMP specifications and to address validation and conformance of OpenMP implementations. SGI will be a key member of this organization.
The OpenMP web site explains how organizations may join the OpenMP ARB.

What does it cost me/who do I get it from/when do I get OpenMP?

For SGI IRIX and UNICOS systems, OpenMP is supported through standard compiler and programming environment releases; there is no special licensing required to use OpenMP.

Who is using OpenMP? What applications are using it?

Many application developers have begun using OpenMP in their applications, or intend to use it in future releases. The "partners" section of the OpenMP web site lists many of these developers.

Where do I find out more?

For general information on OpenMP, see the OpenMP web site: www.openmp.org.

Copyright © 2000 Silicon Graphics, Inc. All rights reserved.