Changeset bd2278d for main_bgl_p.f
- Timestamp: 09/05/08 11:49:42
- Branches: master
- Children: fafe4d6
- Parents: 2ebb8b6
- Files: 1 edited
main_bgl_p.f
--- main_bgl_p.f (r2ebb8b6)
+++ main_bgl_p.f (rbd2278d)
@@ -1,17 +1,17 @@
-c**************************************************************
-c
-c This file contains the main (PARALLEL TEMPERING JOBS ONLY,
-C FOR SINGULAR PROCESSOR JOBS USE main)
-C
-C This file contains also the subroutine: p_init_molecule
-c
-c Copyright 2003-2005 Frank Eisenmenger, U.H.E. Hansmann,
-c                     Shura Hayryan, Chin-Ku
-c Copyright 2007 Frank Eisenmenger, U.H.E. Hansmann,
-c                Jan H. Meinke, Sandipan Mohanty
-c
-C CALLS init_energy,p_init_molecule,partem_p
-C
-c**************************************************************
+! **************************************************************
+!
+! This file contains the main (PARALLEL TEMPERING JOBS ONLY,
+! FOR SINGULAR PROCESSOR JOBS USE main)
+!
+! This file contains also the subroutine: p_init_molecule
+!
+! Copyright 2003-2005 Frank Eisenmenger, U.H.E. Hansmann,
+!                     Shura Hayryan, Chin-Ku
+! Copyright 2007 Frank Eisenmenger, U.H.E. Hansmann,
+!                Jan H. Meinke, Sandipan Mohanty
+!
+! CALLS init_energy,p_init_molecule,partem_p
+!
+! **************************************************************
       program pmain
 
@@ -28,20 +28,20 @@
       logical newsta
 
-cc Number of replicas
+!c Number of replicas
       integer num_replica
-cc Number of processors per replica
+!c Number of processors per replica
       integer num_ppr
-cc Range of processor for crating communicators
+!c Range of processor for crating communicators
       integer proc_range(3)
-cc Array of MPI groups
+!c Array of MPI groups
       integer group(MAX_REPLICA), group_partem
-cc Array of MPI communicators
+!c Array of MPI communicators
       integer comm(MAX_REPLICA), partem_comm
-cc Array of nodes acting as masters for the energy calculation.
+!c Array of nodes acting as masters for the energy calculation.
       integer ranks(MAX_REPLICA)
-cc Configuration switch
+!c Configuration switch
       integer switch
       integer rep_id
-c set number of replicas
+! set number of replicas
       double precision eols(MAX_REPLICA)
       integer ndims, nldims, log2ppr, color
@@ -53,5 +53,5 @@
 
 
-c MPI stuff, and random number generator initialisation
+! MPI stuff, and random number generator initialisation
 
       call mpi_init(ierr)
@@ -88,18 +88,18 @@
       call sgrnd(seed)  ! Initialize the random number generator
 
-c=================================================== Energy setup
+! =================================================== Energy setup
       libdir='SMMP/'
-c Directory for SMMP libraries
+! Directory for SMMP libraries
 
-c The switch in the following line is now not used.
+! The switch in the following line is now not used.
       flex=.false.  ! .true. for Flex / .false. for ECEPP
 
-c Choose energy type with the following switch instead ...
+! Choose energy type with the following switch instead ...
       ientyp = 0
-c 0 => ECEPP2 or ECEPP3 depending on the value of sh2
-c 1 => FLEX
-c 2 => Lund force field
-c 3 => ECEPP with Abagyan corrections
-c
+! 0 => ECEPP2 or ECEPP3 depending on the value of sh2
+! 1 => FLEX
+! 2 => Lund force field
+! 3 => ECEPP with Abagyan corrections
+!
 
       sh2=.false.  ! .true. for ECEPP/2; .false. for ECEPP3
@@ -114,9 +114,9 @@
       call init_energy(libdir)
 
-c calculate CPU time using MPI_Wtime()
+! calculate CPU time using MPI_Wtime()
       startwtime = MPI_Wtime()
 
 
-c================================================= Structure setup
+! ================================================= Structure setup
       grpn = 'nh2'  ! N-terminal group
       grpc = 'cooh' ! C-terminal group
@@ -153,15 +153,15 @@
       ntlml = 0
 
-c Decide if and when to use BGS, and initialize Lund data structures
+! Decide if and when to use BGS, and initialize Lund data structures
       bgsprob=0.6   ! Prob for BGS, given that it is possible
-c upchswitch= 0 => No BGS  1 => BGS with probability bgsprob
-c             2 => temperature dependent choice
+! upchswitch= 0 => No BGS  1 => BGS with probability bgsprob
+!             2 => temperature dependent choice
       upchswitch=1
       rndord=.true.
       if (ientyp.eq.2) call init_lundff
-c=================================================================
-c Distribute nodes to parallel tempering tasks
-c I assume that the number of nodes available is an integer
-c multiple n of the number of replicas. Each replica then gets n
-c processors to do its energy calculation.
+! =================================================================
+! Distribute nodes to parallel tempering tasks
+! I assume that the number of nodes available is an integer
+! multiple n of the number of replicas. Each replica then gets n
+! processors to do its energy calculation.
       num_ppr = num_proc / num_replica
@@ -206,6 +206,6 @@
 !     call mpi_comm_group(mpi_comm_world, group_world, error)
 
-c The current version doesn't require a separate variable j. I
-c could just use i * num_ppr but this way it's more flexible.
+! The current version doesn't require a separate variable j. I
+! could just use i * num_ppr but this way it's more flexible.
 !     j = 0
 !     do i = 1, num_replica
@@ -277,7 +277,7 @@
       nml = 1
 
-cRRRRRRRRRRMMMMMMMMMMMMSSSSSSSSSSDDDDDDDDDDDDD
+! RRRRRRRRRRMMMMMMMMMMMMSSSSSSSSSSDDDDDDDDDDDDD
       call rmsinit(nml,ref_pdb)
-cRRRRRRRRRRMMMMMMMMMMMMSSSSSSSSSSDDDDDDDDDDDDD
+! RRRRRRRRRRMMMMMMMMMMMMSSSSSSSSSSDDDDDDDDDDDDD
 
 !     READ REFERENCE CONTACT MAP
@@ -294,5 +294,5 @@
       end do
 
-c======================================== start of parallel tempering run
+! ======================================== start of parallel tempering run
       write (*,*) "There are ", no,
      &     " processors available for ",rep_id
@@ -303,5 +303,5 @@
       call partem_p(num_replica, nequi, nswp, nmes, nsave, newsta,
      &              switch, rep_id, partem_comm)
-c======================================== end of parallel tempering run
-c calculate CPU time using MPI_Wtime()
+! ======================================== end of parallel tempering run
+! calculate CPU time using MPI_Wtime()
       endwtime = MPI_Wtime()
@@ -319,5 +319,5 @@
       enddo
 
-c======================================== End of main
+! ======================================== End of main
       CALL mpi_finalize(ierr)
 
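The node-distribution scheme the changed comments describe assigns each replica a contiguous block of num_ppr = num_proc / num_replica ranks, with the first rank of each block (the ranks array) acting as that replica's energy master. A minimal Python sketch of that arithmetic, assuming an even division of ranks; the function name distribute_ranks is illustrative and does not appear in the changeset:

```python
def distribute_ranks(num_proc, num_replica):
    """Sketch of the rank-to-replica mapping described in the comments.

    Returns (num_ppr, masters, blocks): processors per replica, the
    master rank of each replica, and each replica's inclusive rank range.
    """
    # The comments assume num_proc is an integer multiple of num_replica.
    if num_proc % num_replica != 0:
        raise ValueError("number of processors must be an integer "
                         "multiple of the number of replicas")
    num_ppr = num_proc // num_replica
    # First rank of each contiguous block is that replica's energy master.
    masters = [i * num_ppr for i in range(num_replica)]
    blocks = [(i * num_ppr, (i + 1) * num_ppr - 1)
              for i in range(num_replica)]
    return num_ppr, masters, blocks
```

With 8 ranks and 4 replicas, each replica gets 2 ranks and the masters are ranks 0, 2, 4, and 6; in the actual code these blocks become MPI groups and communicators (the group and comm arrays).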