This is the user forum of the DFT FLAPW code FLEUR.
It is meant to be used as a place to ask questions to other users and the developers, and to provide feedback and suggestions.
The documentation of the code can be found at the FLEUR homepage.
wortmann
Posts: 41 | Last online: 01.06.2023
Name: Daniel Wortmann
Date registered: 04.04.2021
Sex: not specified
    • wortmann has written a new post "scf error for a material in different space group " 10.18.2022

      With very high probability, your problem is a too-small stack size. FLEUR uses a significant number of automatic variables, which compilers by default place on the stack, so it needs a large stack. We therefore propose to issue the command 'ulimit -s unlimited' before executing FLEUR. On Linux, a corresponding warning should also be printed to the console in your setting.
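
      For example, in the shell (or job script) from which FLEUR is started; the binary name fleur_MPI is only an assumption here and may differ in your installation:

          ulimit -s unlimited      # remove the per-process stack size limit
          mpirun -np 4 fleur_MPI   # then start FLEUR in the same shell session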

      Hope this helps

      Daniel

    • wortmann has written a new post "scf error for a material in different space group " 10.17.2022

      I am sorry, but I cannot reproduce the segfault. Could you:

      - send the output of "ulimit -s" on your compute node
      - run the code with "-debugtime" and paste the last lines of output before the segfault (see the example below)
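
      For example (the fleur_MPI binary name is an assumption; use whatever executable you normally launch):

          ulimit -s              # print the current stack size limit on the node
          fleur_MPI -debugtime   # the timing output indicates where execution stops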

      Daniel

    • wortmann has written a new post "scf error for a material in different space group " 10.16.2022

      Hi,

      Of course, a segmentation fault is in most cases the sign of a bug, but I would claim there is one exception: insufficient memory. To give more advice, the following information would be helpful:

      - which version of FLEUR are you using?
      - on which kind of machine (OS, compiler, etc.)?
      - is the stack size set to "unlimited" in the shell?
      - are you using the inp.xml unmodified from inpgen, or did you change anything?

      Hope this helps,
      Daniel

    • wortmann has written a new post "Cr input file not found" 09.29.2022

      In addition, you have to update the image file. If you do not use the script, please issue the command `docker pull judft/future.noAiiDA`.

      Hope this helps

      Daniel

    • wortmann has written a new post "G-Fleur" 03.09.2022

      Thanks a lot for your interest in the G-FLEUR add-on. Unfortunately, this code has not been maintained for several years and is thus not compatible with recent FLEUR versions. I hope I will find some time (sometime) to work on it again.

      Best regards

      Daniel

    • wortmann has written a new post "Magnetic field (b_field)" 01.05.2022

      Dear Dongwook,

      From what you describe (or what I understand), your results should be fine. What is "missing" in our treatment of a Zeeman field is a contribution to the total energy. The total energy has basically two contributions: a) the sum of the eigenvalues, and b) direct integrals over products of the charge densities and the potentials. As we include the Zeeman field in the potential only in a "post-processing" step, these (b) terms are not evaluated. But this is relevant for the total energy only; the magnetization you are interested in, for example, is evaluated from the eigenvectors, which are calculated taking the Zeeman field into account. The missing term in the total energy does not change anything in the self-consistency. Actually, as long as the Zeeman field you add is constant in space, the missing M.B term can even be evaluated "by hand" from the output ;-)
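
      To make that last remark concrete, here is a sketch of the by-hand evaluation (the sign convention is an assumption and may differ from FLEUR's internal one). For a field that is constant in space, the missing term is

          $\Delta E = -\int \mathbf{m}(\mathbf{r}) \cdot \mathbf{B} \, d^3r = -\mathbf{M} \cdot \mathbf{B}$

      where $\mathbf{M}$ is the total magnetic moment taken from the FLEUR output.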

      Hope this helps
      Daniel

    • wortmann has written a new post "van der Waals correction in relaxation" 12.03.2021

      Dear Jiaqi,

      There exists a module implementing the approach by Guillermo Román-Pérez and José M. Soler (Phys. Rev. Lett. 103, 096102) in FLEUR. As far as I remember, its status is the following:
      - It is in the source code (fleur_VDW.F90) but currently not called, so this module would have to be hooked in again (probably somewhere in the potential setup) to be usable. I currently do not know anyone in our team with spare time to do this, but of course you are very welcome to look at it. If you plan to do so and need additional help, please open a corresponding GitLab issue.
      - It can be used to calculate a vdW contribution to the total energy, so your idea of varying the spacing could be feasible.
      - It can also be used to calculate a contribution to the potential, which could be included in the SCF cycle. However, I believe it would then also give rise to additional force contributions in a relaxation, and these are not implemented.

      Hope this helps
      Daniel

    • wortmann has written a new post "Question about Force theorem calculations + MPI parallelization" 11.15.2021

      There is obviously a bug in the IO for the MPI case. Could you please create an issue for this bug at iffgit.fz-juelich.de/fleur/fleur, uploading the input and the details of how you run in parallel?
      Thanks a lot.

      Daniel

    • wortmann has written a new post "Fermi Surface Calculation" 11.11.2021

      I am sorry, but I do not think that we have a publicly available tool for this task. As a general comment: you should create a banddos.hdf file with many k-points and then use a suitable visualization framework.

      Hope this helps

      Daniel

    • wortmann has written a new post "Tutorials/Examples input files" 11.09.2021

      As described at https://www.flapw.de/MaX-5.1/tutorial_docker/, you have two options to access the tutorial with all files: a) use docker/podman and run the image we provide, or b) download the tar file with the HTML version of the tutorial. In both cases the input files are also provided.

      Hope this helps,

      Daniel

    • wortmann has written a new post "error with MPI parallelization" 11.08.2021

      The problem seems to be that the MPI library does not handle the one-sided communication correctly. You could try to run FLEUR with the '-disable_progress_thread' option, which hopefully fixes this. Otherwise, you could try to use the parallel HDF5 library instead of keeping the data in memory, by running with '-eig66 hdf5'.
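
      For example (the mpirun launcher and the fleur_MPI binary name are placeholders for whatever you normally use):

          mpirun -np 4 fleur_MPI -disable_progress_thread   # first attempt: avoid the MPI progress thread
          mpirun -np 4 fleur_MPI -eig66 hdf5                # fallback: keep eigenvector data in HDF5 files instead of memory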

      Perhaps it is also possible to persuade the MPI to allow RMA even over a "standard" (Ethernet?) network. There might be installation options your system guru knows about.

      Hope this helps,

      Daniel

    • wortmann has written a new post "MPI_Abort" 10.29.2021

      Without further investigation, my guess would be that the crash actually occurs in the diagonalization. You use 24 MPI processes for 32 k-points, if I am not mistaken. Hence, you treat 8 k-points in parallel and use 3 PEs for eigenvalue parallelism. This is not recommended on our cluster. I would suggest using 8 MPI processes and 3 OpenMP threads instead.
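
      For example, in a SLURM-style job script (the launcher and the fleur_MPI binary name are assumptions):

          export OMP_NUM_THREADS=3                # 3 OpenMP threads per MPI rank
          srun -n 8 --cpus-per-task=3 fleur_MPI   # 8 MPI ranks; each rank then works through 4 of the 32 k-points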

      Hope this helps

      Daniel

    • wortmann has written a new post "MPI_Abort" 10.28.2021

      OK, the code crashes. Of course, we cannot really provide useful guidance here, except wild guesses :-). You will need to provide more information. Typically, this info can be useful:
      - what is your input in detail (providing the inp.xml would help)?
      - how did you start the code? How many MPI processes on how many nodes? What about OpenMP?
      - how long did the calculation run before crashing?

      Hope this helps,
      Daniel

    • wortmann has written a new post "Convergence Problem" 10.26.2021

      After an (admittedly very brief, so perhaps I missed something) look at your input, I am pretty sure that your structure is wrong. The MT radii are too small, indicating that you perhaps used Angstroem instead of a.u. for your lattice, or something similar. So please verify that the structure you are using is correct.
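
      As a quick check (the 5.43 Angstroem value is just a hypothetical example): lattice data is expected in atomic units, with 1 Angstroem corresponding to about 1.8897 a.u., so

          5.43 Angstroem × 1.8897 a.u./Angstroem ≈ 10.26 a.u.

      If Angstroem values are entered where a.u. are expected, the cell comes out almost a factor of two too small, and the muffin-tin radii shrink accordingly.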

      Hope this helps,

      Daniel

    • wortmann has written a new post "k-point path" 10.20.2021

      Regarding the HDF5 issue, I would like to add that the git version of the code (i.e. the source obtained by cloning from iffgit) also allows one to include the download and compilation of HDF5 in the build process of FLEUR itself. This is done by using the option "-hdf5 true" for the configure.sh script.
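
      For example (the clone URL is assembled from the repository mentioned above and may need adjusting):

          git clone https://iffgit.fz-juelich.de/fleur/fleur.git
          cd fleur
          ./configure.sh -hdf5 true   # fetch and compile HDF5 as part of the FLEUR build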

      Hope this helps
      Daniel
