jwjung

Forum Replies Created

    • #25864

      jwjung
      Moderator

      Atdyn uses atomic decomposition parallelization, so its parallel performance is not very good. Based on your tests, I don’t expect any significant performance improvement with atdyn.

    • #25860

      jwjung
      Moderator

      Generally, you’re right: increasing the number of processes improves performance. In this case, however, it would be better to first check what the main cause of the problem is.

      By the way, could you let me know whether you’re using single or mixed precision? Could you also tell me which simulator you are using, atdyn or spdyn?
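
      In case it helps, the precision model is normally chosen when GENESIS is built. A minimal sketch of the configure step, assuming the usual autoconf-based build and that the --enable-mixed / --enable-single options are available in your version (double precision is the default):

      ./configure --enable-mixed   # mixed precision
      ./configure --enable-single  # single precision
      make install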

    • #25858

      jwjung
      Moderator

      I think the performance you report is very different from what we have observed. Could you first try 8 MPI processes with 3 OpenMP threads?

      For such a small system, I think there is no reason to use a large number of MPI processes and OpenMP threads.
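
      For reference, a minimal sketch of how such a hybrid MPI/OpenMP run could be launched; the spdyn path and the input/log file names are placeholders for your own:

      export OMP_NUM_THREADS=3
      mpirun -np 8 /path/genesis/spdyn INP > md.log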

    • #25706

      jwjung
      Moderator

      In your log, it looks like you compiled without MPI. For SPDYN, you need to compile with MPI.
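
      As a rough sketch, assuming the standard autoconf-based build and that MPI compiler wrappers such as mpif90/mpicc are available on your machine (the exact wrapper names depend on your MPI installation):

      ./configure FC=mpif90 CC=mpicc
      make install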

    • #25704

      jwjung
      Moderator

      The initial structure seems to be very unstable. Please try the structure_check and nonb_limiter options. I’m not sure about the memory allocation error; one possibility is that your computer does not have enough memory.
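
      If it helps, a rough sketch of how these options might look in the control file; I am not certain of the exact section placement (it may be [ENERGY] or [MINIMIZE] depending on the GENESIS version), so please check the manual:

      [MINIMIZE]
      method           = SD
      nsteps           = 2000
      structure_check  = yes   # report abnormally close atom pairs; see the manual for the allowed values
      nonb_limiter     = yes   # limit very large nonbonded forces from bad contacts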

    • #25693

      jwjung
      Moderator

      Could you let us know the chapter number of the tutorial so that we can understand your question more easily?

    • #25692

      jwjung
      Moderator

      The easiest solution is to increase the cell size, which you can do by assigning a larger pairlistdist value. Could you increase it gradually up to 17 and run again? If the minimization finishes well, you can go back to the original pairlistdist value in the next run.
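
      A minimal sketch of the relevant [ENERGY] lines; the cutoff values here are only illustrative and should follow your own input:

      [ENERGY]
      switchdist    = 10.0
      cutoffdist    = 12.0
      pairlistdist  = 17.0   # temporarily enlarged so that the cell size increases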

    • #25691

      jwjung
      Moderator

      I checked your attached files, and the results seem to be okay.

    • #20881

      jwjung
      Moderator

      For the regression tests, you need to enclose the execution command in double quotation marks. For example:

      ./test.py "mpirun -np 8 /path/genesis/atdyn"

      As for the log file, it says that top_all36_prot.rtf does not exist. Please check whether the topology file is actually located at that path.

    • #20734

      jwjung
      Moderator

      Do you mean you found errors when you tried the same run as in the tutorial? If the errors occurred with your own system, could you tell me whether the tutorial system runs without problems? Could you also give us information about the compiler and MPI library installed on your machine?

    • #17154

      jwjung
      Moderator

      At the moment, flat-bottom style position restraints are not available in GENESIS.

    • #16045

      jwjung
      Moderator

      Dear Kitao-sensei,

      Thank you for the feedback.
      It’s good to see that the problem was solved by using the mixed precision model.

      Best regards,
      Jaewoon Jung

    • #16042

      jwjung
      Moderator

      Dear Kitao-sensei,

      Many thanks for the outputs.

      First of all, the position restraint virial is not related to the problem; it was just a recommendation.

      One thing I’m not sure about is the difference in box size between the figure and the output files. For example, the initial box size is (276.0087, 276.0087, 240.6425) in the output file, but it appears to be about (279, 279, 235) in the attached figure. There is also a difference between the output and the figure in the box size at 50 ns. If you don’t mind, could you check this?

      Regardless of the difference between the output and the figure, it would be a problem if the box size changed differently in each dimension even though the isotropic condition is assigned. If you don’t mind, could you share the log of the run up to 5 ns? The file may be too large to attach on this webpage; you could send it another way, e.g. by e-mail or Box. My e-mail address is jung@riken.jp.

      If you don’t mind, I’d also like to ask you to run with tpcontrol=BUSSI up to 10 ns to see whether the same phenomenon occurs.
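
      For reference, a minimal sketch of the [ENSEMBLE] lines for such a test run; the values other than tpcontrol are placeholders and should follow your current input:

      [ENSEMBLE]
      ensemble    = NPT
      tpcontrol   = BUSSI
      temperature = 300.0
      pressure    = 1.0
      isotropy    = ISO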

      We’re sorry for the trouble and for asking so much; we are looking into the cause of the problem and how to fix it.

      Best regards,
      Jaewoon Jung

    • #16036

      jwjung
      Moderator

      Dear Kitao-sensei,

      This is Jaewoon Jung from the GENESIS developer team. I’m sorry for replying in English.

      I checked the attached output files, and it seems that the ratio does not change.
      At step 0, the box sizes in the x, y, and z dimensions are 279.8376, 279.8376, and 235.7621, so the ratio between the x and z dimensions is 279.8376/235.7621 = 1.18694904737. At step 2000, the box sizes are 279.0337, 279.0337, and 235.0873, giving a ratio of 279.0337/235.0873 = 1.18693651252. The ratio changes slightly, but this seems to be just due to the limited number of digits in the output.

      Because we don’t have the same inputs, we printed the barostat momentum from our own inputs under the BOXX = BOXY > BOXZ condition, and found that the barostat momentum and the resulting box size scaling factors are the same in all dimensions.

      Judging from the output, the box size in the z dimension is also decreasing, like those in the x and y dimensions. If you observe the unexpected phenomenon in a long MD run (not just the 2000 steps in the attached file), could you show us the log (part of the log is also fine)?

      Besides the box size ratio issue above, I have a few suggestions about the input (a sketch of the corresponding control-file lines follows after this list).
      1. In the [ENSEMBLE] section, you can obtain better performance by writing group_tp=yes. A related reference can be found at https://aip.scitation.org/doi/full/10.1063/5.0027873 .
      2. If you want to run NPT MD with positional restraints, how about including the position restraint virial in the pressure? You can do this by writing pressure_position=yes in the [RESTRAINTS] section.
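
      A minimal sketch of the two suggested lines in context; everything else here is a placeholder and should follow your own settings:

      [ENSEMBLE]
      ensemble          = NPT
      tpcontrol         = BUSSI
      group_tp          = yes    # group-based temperature/pressure evaluation

      [RESTRAINTS]
      nfunctions        = 1
      function1         = POSI
      pressure_position = yes    # include the position restraint virial in the pressure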

      Please let us know if our answer is not sufficient or you have additional questions/comments.

      Best regards,
      Jaewoon Jung

    • #16474

      jwjung
      Moderator

      Dear Geng,

      In GENESIS, the GPU is used for the real-space nonbonded interactions (van der Waals and electrostatics), while the CPU handles the other parts (bond/angle/dihedral calculations and the reciprocal-space electrostatics). In practice, the overall performance is limited by the CPU calculation of those parts, and I guess that is what causes the low GPU utilization.
