GiBUU is hosted by Hepforge, IPPP Durham
GiBUU

Changes between Version 4 and Version 5 of AnnotatedJobHarp


Timestamp:
Jan 21, 2009, 9:39:49 PM
Author:
gallmei

  • AnnotatedJobHarp

    v4 v5  
    37 37      numEnsembles= 100         ! number of ensembles
    38 38 }}}
    39 The number of ensembles the code treats in the one run. Together with num_runs_SameEnergy (below) responsible for the statistics to collect in the total run. If no potentials are switched on and also the particle density is calculated analytically (as in the given example here), only the total number numEnsembles*num_runs_SameEnergy matters. (Nevertheless, the larger numEnsembles, the more memory the code occupies.)
     39 The number of ensembles the code treats in one run. Together with num_runs_SameEnergy (see below), it determines the statistics collected in the total run. If no potentials are switched on and the particle density is calculated analytically (as given here), only the total product numEnsembles*num_runs_SameEnergy matters. Nevertheless, the larger numEnsembles, the more memory the code occupies (see also the discussion below).
    40 40 {{{
    41 41
     
    51 51 !      length_perturbative = 4000 ! adjust according to target nucleus
    52 52 }}}
    53 Here we set the size the code should allocate for the perturbative particle vector of the final state particles. The actual array size is numEnsembles*length_perturbative times the number of bytes we need to store all information for one individual particle. For larger target nuclei you need more possible entries. On the other hand you should not choose the vector too large, since you may run into problems with memory. Additionally, although almost empty, a particle vector chosen too large may also slow down the code.
     53 Here we set the size the code should allocate for the perturbative particle vector of the final-state particles. The actual array size is numEnsembles*length_perturbative times the number of bytes needed to store all information for one individual particle. Larger target nuclei need more entries. On the other hand, you should not choose the vector too large, since you may run into memory problems. Additionally, a particle vector chosen too large slows down the code even when it is almost empty.
    54 54 Depending on energy, process type and size of the target nucleus, one should find a compromise. A good hint is the line
    55 55 ''#### pertParticles is occupied by   xxx% with   yyy% ensembles over 80%'', which the code prints every timestep.
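To make the two products discussed above concrete, here is a small back-of-the-envelope sketch. This is not GiBUU code: the num_runs_SameEnergy value and the bytes-per-particle figure are purely illustrative assumptions, not numbers taken from the GiBUU source.

```python
# Illustrative arithmetic for the two parameter products discussed above.
# bytes_per_particle = 200 is an ASSUMED placeholder, not a GiBUU value.

def total_events(num_ensembles: int, num_runs_same_energy: int) -> int:
    """The collected statistics scale with this product."""
    return num_ensembles * num_runs_same_energy

def pert_vector_bytes(num_ensembles: int, length_perturbative: int,
                      bytes_per_particle: int = 200) -> int:
    """Rough memory footprint of the perturbative particle vector."""
    return num_ensembles * length_perturbative * bytes_per_particle

if __name__ == "__main__":
    n_ens, n_runs, length = 100, 20, 4000   # n_runs is an assumed example
    print(total_events(n_ens, n_runs))                      # 2000
    print(f"{pert_vector_bytes(n_ens, length) / 1e6:.0f} MB")  # 80 MB
```

Doubling numEnsembles while halving num_runs_SameEnergy leaves the statistics unchanged but doubles the memory footprint, which is the trade-off the text describes.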