APBS 0.4.0 User Guide: Adaptive Poisson-Boltzmann Solver
If you were unable to find the binary package for your system, or would like to compile APBS yourself, you'll need to read the instructions in this section.
In order to install APBS from the source code, you will need:
C and Fortran compilers
The APBS source code (see above)
The MALOC hardware abstraction library (available from http://www.scicomp.ucsd.edu/~mholst/codes/maloc/index.html)
A version of MPI (try MPICH) for parallel jobs.
MPI isn't strictly necessary if the async option is used.
A compatible visualization program (see the "Visualization" section of this document)
In what follows, I'll be assuming you're using bash, a fantastic shell available on many platforms (UNIX and non-UNIX).
First, please look at the "Machine-specific notes" section of this document for appropriate compiler flags, etc. to be set via pre-configuration environmental variables. It's not a big deal if you skip this step, but APBS will run more slowly.
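As a sketch of what this looks like, the machine-specific settings are ordinary shell exports made before running configure (the compiler names and flag values below are placeholders for illustration, not recommendations for any particular machine):

```shell
# Illustrative pre-configuration variables; consult the
# "Machine-specific notes" section for values appropriate to your machine.
export CC=gcc          # C compiler
export F77=g77         # Fortran compiler
export CFLAGS='-O2'    # C compiler flags
export FFLAGS='-O2'    # Fortran compiler flags
```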
There are two directories you'll need to identify prior to installation. The first, which we'll call FETK_SRC, will contain the APBS and MALOC source code. This directory can be deleted after installation, if you wish. The second directory will be the permanent location for APBS and MALOC; we'll call this FETK_PREFIX. If you have root permission, you could pick a global directory such as /usr/local for this; otherwise, pick a directory for which you have write permission. The following commands set up the directories and the environmental variables which point to them:
$ export FETK_SRC=/home/soft/src
$ export FETK_PREFIX=/home/soft
$ export FETK_INCLUDE=${FETK_PREFIX}/include
$ export FETK_LIBRARY=${FETK_PREFIX}/lib
$ mkdir -p ${FETK_SRC} ${FETK_INCLUDE} ${FETK_LIBRARY}
If you're planning to use MPI, you'll need to set some additional environmental variables. The variable FETK_MPI_INCLUDE points to the directory where the MPI header files reside (mpi.h) and the variable FETK_MPI_LIBRARY points to the directory where the MPI libraries are located (libmpi.a or libmpich.a). For example (the paths below are illustrative; substitute the locations of your own MPI installation):
$ export FETK_MPI_INCLUDE=/usr/local/mpich/include
$ export FETK_MPI_LIBRARY=/usr/local/mpich/lib
You're now ready to unpack the source code:
$ cd ${FETK_SRC}
$ gzip -dc maloc.tar.gz | tar xvf -
$ gzip -dc apbs-0.4.0.tar.gz | tar xvf -
Now we need to compile the hardware-abstraction library, MALOC. First, go to the MALOC directory:
$ cd ${FETK_SRC}/maloc
If you are not starting from a freshly-untarred version of MALOC, you need to clean up the previous distribution first; with a standard autoconf build tree this is done by:
$ make distclean
Sequential execution only (no MPI):
$ ./configure --prefix=${FETK_PREFIX}
With parallel execution (MPI):
$ ./configure --prefix=${FETK_PREFIX} --enable-mpi
Be sure to keep an eye out for warning messages during the configuration of MALOC, especially if you are using MPI.
$ make; make install
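As a quick sanity check (a suggestion on our part, not a step from the original guide), you can list the install tree to confirm that MALOC's headers and libraries ended up where APBS will look for them:

```shell
# FETK_INCLUDE and FETK_LIBRARY were set earlier in this section;
# both directories should now contain MALOC files.
ls ${FETK_INCLUDE}
ls ${FETK_LIBRARY}
```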
APBS is configured and installed much the same way as MALOC. First, you need to configure with the autoconf configure script. As before, you can examine the various configure options with the --help option. For most platforms, no options need to be specified; APBS's autoconf setup automatically detects whether MALOC was compiled with MPI and configures itself appropriately. Therefore, most users can configure as follows:
$ cd ${FETK_SRC}/apbs
$ ./configure --prefix=${FETK_PREFIX}
If you have a vendor-supplied BLAS math library, you will probably compile a faster version of APBS if you link to it instead of the BLAS version provided with MALOC. This is done by passing the appropriate linking options to configure with the --with-blas flag. For example, suppose you had a machine-specific version of the BLAS library at /usr/local/lib/libblas.a. You would then configure APBS by:
$ ./configure --prefix=${FETK_PREFIX} --with-blas='-L/usr/local/lib -lblas'
$ make all
$ make install
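To verify the build (a suggested check, assuming the default autoconf install layout places the executable under bin/), run the installed binary; invoked without an input file, APBS prints usage information rather than performing a calculation:

```shell
# Run the freshly installed APBS executable with no arguments
# to see its usage message.
${FETK_PREFIX}/bin/apbs
```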
While the APBS and MALOC autoconf configure scripts are flexible enough to work on most platforms, the resulting executables don't always offer optimal performance. We're slowly trying to provide binary support for some of the more popular platforms; this section is meant to supplement our pre-compiled binaries and provide some tips on getting a better APBS binary on your platform. If you have tips or tricks on improving APBS performance on your machine, please let us know!
In what follows, we're denoting Pentium/Xeon 32-bit Intel machines as "IA32" and Itanium 64-bit Intel machines as "IA64".
We are happy to now provide native APBS command line binaries for Windows. The binary is probably the best option available, but if you would still like to compile your own binaries you will need to use either the Cygwin or MinGW environments. Binaries compiled under Cygwin tend to require Cygwin DLLs and thus can only be run on systems with Cygwin. Performance for the Windows binaries and all compiled systems will be fairly mediocre as they depend on the GNU compilers.
If you do choose to use Cygwin and compile your own code, compilation should be rather straightforward.
Compilation under Linux should be very straightforward as this is the platform on which APBS was developed. This section describes various compilation options under Linux.
Nearly every Linux distribution comes with the GNU compilers; autoconf will configure with these by default. Furthermore, autoconf will automatically choose reasonable optimization (-O2) and debugging (-g) options. I haven't had very good luck improving performance beyond what's available with -O2 and, given the availability of the free Intel compilers, I'm not sure it's worth trying too hard.
We've mainly used the free (for Linux) Intel compilers and have observed very good performance. There were some incompatibility issues with version 7 of the Intel compilers and newer versions of Linux (particularly those running glibc 2.3, e.g. RedHat 8 and 9).
First, you need to make sure the Intel compilers are set up properly; this is usually done by sourcing one of the bash or csh setup scripts provided with the compilers.
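For example (the install prefix, version number, and script name below are illustrative; they vary between compiler releases and installations):

```shell
# Source the Intel compiler environment script for bash;
# csh users would source the corresponding .csh script instead.
source /opt/intel/compiler70/ia32/bin/iccvars.sh
```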
You then need to define the environmental variables appropriate to these compilers before you configure either MALOC or APBS. This is done by:
$ export CC='icc'
$ export CXX='icc'
$ export F77='ifort'
$ export LDFLAGS='-static-libcxa'
Finally, you'll want to choose some optimization options. Intel has a number of options that are specific to the type of processor you are running; the examples below assume you are running on a Pentium 4:
$ export FFLAGS='-fast -arch pn4'
$ export CFLAGS='-fast -tpp5'
$ export CXXFLAGS=${CFLAGS}
Since the MALOC-supplied BLAS is not 64-bit clean, a third-party BLAS must be used for installation on an Itanium. We have had good success with the Intel MKL libraries. If no independent BLAS is available, you may want to try either compiling your own (such as ATLAS) or using the ia64 binary instead.
If you do use the MKL libraries, you will need to modify a few command line options.
First, use the --disable-blas flag while configuring MALOC to prevent MALOC's BLAS from interfering with the APBS installation:
$ ./configure --prefix=${FETK_PREFIX} --disable-blas
$ make; make install
Then when compiling APBS, specify that APBS is to use the MKL BLAS and the name of the actual BLAS library:
$ export INTEL_BLAS=/path/to/MKL/lib/directory
$ ./configure --prefix=${FETK_PREFIX} --with-blas="-L${INTEL_BLAS} -lmkl_lapack -lmkl_ipf -ldl" --with-blas-name="mkl_lapack"
$ make; make install
There are a number of other good compilers (Portland Group, Absoft) that we have not tested with APBS. If you have experience with these, please let us know.
We are happy to now provide a Mac install package for G5 OS 10.4 (Tiger). Unfortunately, this is the only Mac binary we have available, so users on G4s or OS 10.3 may have to compile binaries themselves; you may want to examine the apbs-users mailing list, which has a number of threads discussing installation on Mac OS platforms. Alternatively, you can try using Fink for the installation; please see Bill Scott's excellent guidelines at http://chemistry.ucsc.edu/~wgscott/xtal.
A few notes about compiling on Macintosh:
It has become apparent from the mailing lists that some "packages" of the GNU development software available for MacOS contain different major versions of the C and FORTRAN compilers. This is very bad; APBS will not compile with different versions of the C and FORTRAN compilers. If you use GCC 4.0, for instance, gfortran 4.0 will work while g77 3.3 will not. If you see link errors involving "restFP" or "saveFP" this is most likely the cause.
In gcc 4.0 (included in Xcode 2.0 and higher), the -fast option turns on the -ffast-math flag. This flag speeds up floating-point arithmetic by relaxing rounding rules and can therefore produce inaccurate results, so it should be avoided.
As it stands now, the autoconf script does not support using the native vecLib framework as an architecture-tuned BLAS replacement. In our testing, vecLib offered only slight timing improvements over the MALOC-supplied BLAS anyway.
We have had success using IBM's XLF for Mac in conjunction with GCC 4.0, although the corresponding XLC compilers do not seem to work under Tiger.
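The compiler-version mismatch described above can be caught before configuring. As a sketch (the helper function below is our own illustration, not part of APBS), compare the major version numbers reported by your C and Fortran compilers:

```shell
# check_majors: warn if two compilers report different major versions.
# Usage: check_majors gcc gfortran   (or: check_majors gcc g77)
check_majors() {
    v1=$("$1" -dumpversion | cut -d. -f1)
    v2=$("$2" -dumpversion | cut -d. -f1)
    if [ "$v1" = "$v2" ]; then
        echo "major versions match ($v1)"
    else
        echo "MISMATCH: $1 is $v1.x but $2 is $v2.x"
    fi
}
```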
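To sidestep the -fast rounding issue noted above, one conservative option (a suggestion on our part, assuming the GNU compilers) is to set plain optimization flags explicitly before configuring, since -O3 does not enable the unsafe floating-point shortcuts:

```shell
# Conservative optimization without -fast's floating-point shortcuts.
export CFLAGS='-O3'
export FFLAGS='-O3'
export CXXFLAGS='-O3'
```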
These are various compilation options I experimented with on the NPACI Blue Horizon platform -- we're working on acquiring a Power4 machine for additional notes. However, I expect some of the issues are applicable to other AIX machines. In what follows, I used the mpcc and mpxlf compilers.
In order to use a reasonable amount of memory during runs, you also need to specify -bmaxdata:0x80000000 and -bmaxstack:0x10000000, or whatever values are appropriate to your system. You'll also want to link to the IBM blas, mass, and essl libraries (if available) and optimize as much as possible for the specific machine you're running on. Putting it all together gives:
$ export CC=mpcc
$ export CXX=mpcc
$ export F77=mpxlf
$ export BLASPATH=/usr/lib
$ export CFLAGS="-bmaxdata:0x80000000 -bmaxstack:0x10000000 -L/usr/local/apps/mass -lmass -lessl -O3 -qstrict -qarch=pwr3 -qtune=pwr3 -qmaxmem=-1 -qcache=auto"
$ export FFLAGS="-qfixed=132 -bmaxdata:0x80000000 -bmaxstack:0x10000000 -L/usr/local/apps/mass -lmass -lessl -O3 -qstrict -qarch=pwr3 -qtune=pwr3 -qmaxmem=-1 -qcache=auto"
Compilation tips for this system are short and sweet: Whatever you do, don't use the GNU compilers; they result in very slow binaries.
Two tips:
Don't use the GNU compilers; they result in very slow binaries.
Use the vendor-supplied BLAS library.
Since the MALOC BLAS is not 64-bit clean, you must use a third-party BLAS for installation on the Opteron. We have had good success using the Portland Group compilers and the associated PGI BLAS libraries, although a different third-party BLAS (such as ATLAS) should work as well. For the Portland compilers:
Set the following flags for use with the Portland compilers:
$ export CC=pgcc
$ export CFLAGS='-O2 -fPIC -fastsse -Bstatic'
$ export F77=pgf77
$ export FFLAGS='-O2 -fPIC -fastsse -Bstatic'
When compiling MALOC, disable the building of the BLAS library:
$ ./configure --prefix=${FETK_PREFIX} --disable-blas
$ make; make install
Locate the BLAS library in your PGI installation and configure APBS to use that library, and then finish the installation:
$ export BLAS_DIR='/path/to/blas'
$ ./configure --with-blas="-L${BLAS_DIR} -lblas"
$ make; make install