Dear Farhan,
indeed, there seems to be a problem with the automatic download and compilation of the wannier90 library. I created a corresponding issue which you might want to follow: https://iffgit.fz-juelich.de/fleur/fleur/-/issues/709
For the time being, you should download and compile the wannier90 library independently of FLEUR and make it available by specifying the correct linker flags to the configure script, as explained, e.g., in our interactive tutorial.
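For reference, a rough sketch of such an external build might look like the following. The wannier90 version, the paths, and the FLEUR_LIBDIR/FLEUR_LIBRARIES environment variables are only my assumptions here; please check the interactive tutorial for the exact mechanism on your system:

```
# build wannier90 separately (version number is just an example)
wget https://github.com/wannier-developers/wannier90/archive/v3.1.0.tar.gz
tar xzf v3.1.0.tar.gz
cd wannier90-3.1.0
cp config/make.inc.gfort make.inc   # pick a make.inc matching your compiler
make lib                            # produces libwannier.a

# then point the FLEUR configure script at the resulting library
cd /path/to/fleur
FLEUR_LIBDIR="/path/to/wannier90-3.1.0" \
FLEUR_LIBRARIES="-lwannier" \
./configure.sh
```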
Hope this helps
Daniel
-
-
It was pointed out to us that the issue was reported on the FLEUR-AIIDA GitHub. I have now transferred it to the FLEUR issue tracker.
-
-
Dear Farhan,
thanks a lot for your report. This looks very much like a bug (it might actually be a problem we have been trying to track down for some time). Could you please file an issue at https://iffgit.fz-juelich.de/fleur/fleur/-/issues/new? Please use the bug template; it will give you some idea of which information to provide.
Then we can have a look at the problem in detail.
Hope this helps, Daniel
-
-
With very high probability, your problem is a stack size that is too small. FLEUR uses a significant number of automatic variables, which compilers by default place on the stack; hence, we need a large stack. Therefore, we propose issuing the command 'ulimit -s unlimited' before executing FLEUR. On Linux, a corresponding warning should also have been printed to the console in your setting.
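For example, in an interactive shell or at the top of your job script (a sketch; the executable name may differ in your build):

```
ulimit -s              # show the current stack limit
ulimit -s unlimited    # lift it for this shell and all child processes
./fleur_MPI            # then start FLEUR as usual
```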
Hope this helps
Daniel
-
-
I am sorry, but I cannot reproduce the segfault. Could you:
- send the output of "ulimit -s" on your compute node
- run the code with "-debugtime" and paste the last lines of output before the segfault.
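Concretely, on the compute node this would be something like (assuming the fleur_MPI executable; adapt to your launcher):

```
ulimit -s                  # report the stack limit FLEUR actually sees
./fleur_MPI -debugtime     # the trace printed before the segfault is what we need
```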
Daniel
-
-
Hi,
Of course, a segmentation fault is in most cases the sign of a bug. I would claim there is one exception: insufficient memory. To give more advice, the following information would be helpful:
- Which version of FLEUR are you using?
- On which kind of machine (OS, compiler, etc.)?
- Is the stack size set to "unlimited" in the shell?
- Are you using the inp.xml unmodified from inpgen, or do you change anything?
Hope this helps, Daniel
-
-
In addition, you have to update the image file. If you do not use the script, please issue the command `docker pull judft/future.noAiiDA`.
Hope this helps
Daniel
-
-
Thanks a lot for your interest in the G-FLEUR add-on. Unfortunately, this code has not been maintained for several years and thus is not compatible with recent FLEUR versions. I hope I will find some time (sometime) to work on it again.
Best regards
Daniel
-
-
Dear Dongwook,
from what you describe (or what I understand of it), your results should be fine. What is "missing" in our treatment of a Zeeman field is a contribution to the total energy. The total energy has basically two contributions: a) the sum of the eigenvalues, and b) direct integrals over products of the charge densities and the potentials. As we include the Zeeman field in the potential only in a "post-processing" step, these b) terms are not evaluated. But this is relevant for the total energy only; the magnetization you are interested in, for example, is evaluated from the eigenvectors, which are calculated taking the Zeeman field into account. The missing term in the total energy does not change anything in the self-consistency. Actually, as long as the Zeeman field you add is constant in space, the missing M·B term can even be evaluated "by hand" from the output ;-)
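To make that last remark explicit: for a spatially constant field, the missing density-potential integral reduces to a simple product (a sketch; the sign and unit conventions depend on the code):

```
\Delta E_{\text{missing}} \;=\; -\,\vec{B}\cdot\vec{M},
\qquad
\vec{M} \;=\; \int \vec{m}(\vec{r})\,\mathrm{d}^3 r ,
```

where \vec{m}(\vec{r}) is the magnetization density and \vec{M} the total moment you can read off from the output.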
Hope this helps, Daniel
-
-
Dear Jiaqi,
there exists a module implementing the approach by Guillermo Román-Pérez and José M. Soler (Phys. Rev. Lett. 103, 096102) in FLEUR. As far as I remember, its status is the following:
- It is in the source code (fleur_VDW.F90) but currently not called. So this module would have to be hooked in again (probably somewhere in the potential setup) to be usable. I currently do not know anyone in our team with spare time to do this, but of course you are very welcome to look at it. If you plan to do so and need additional help, please open a corresponding GitLab issue.
- It can be used to calculate a vdW contribution to the total energy, so your idea of varying the spacing could be feasible.
- It can also be used to calculate a contribution to the potential, which could be included in the SCF cycle. However, I would believe that, if used in a relaxation, it should also lead to additional force contributions that are not implemented.
Hope this helps, Daniel
-
-
There is obviously a bug in the IO for the MPI case. Could you please create an issue for this bug at iffgit.fz-juelich.de/fleur/fleur, uploading the input and the details of how you run in parallel? Thanks a lot.
Daniel
-
-
I am sorry, but I do not think that we have a publicly available tool for this task. As a general comment, you should create a banddos.hdf file with many k-points and then use a suitable visualization framework.
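If you have the HDF5 command-line tools available, you can first inspect what the file contains before deciding on a visualization route (the exact group and dataset names depend on the FLEUR version, so check rather than assume):

```
h5ls -r banddos.hdf     # recursively list all groups and datasets
h5dump -H banddos.hdf   # print only the header/metadata, not the data
```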
Hope this helps
Daniel
-
-
As described at https://www.flapw.de/MaX-5.1/tutorial_docker/, you have two options to access the tutorial with all files: a) use docker/podman and run the image we provide, or b) download the tar file with the HTML version of the tutorial. In both cases, the input files are also provided.
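For option a), the invocation would be along these lines (a sketch: I am assuming the image name mentioned earlier in this thread, and the port mapping is only needed if the tutorial serves a web page; see the tutorial page for the exact command):

```
docker pull judft/future.noAiiDA                   # or: podman pull ...
docker run -it -p 8888:8888 judft/future.noAiiDA   # start an interactive container
```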
Hope this helps,
Daniel
-
-
The problem seems to be that the MPI library does not handle the one-sided communication correctly. One could try to run FLEUR with the '-disable_progress_thread' option, which hopefully fixes this. Otherwise, you could try to use the parallel HDF5 library instead of keeping the data in memory by running with '-eig66 hdf5'.
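In practice, that means something like (a sketch; replace mpirun with whatever launcher you use):

```
mpirun -n 4 fleur_MPI -disable_progress_thread   # work around the one-sided-communication issue
# or keep the eigenvector data on disk via parallel HDF5 instead of in memory:
mpirun -n 4 fleur_MPI -eig66 hdf5
```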
Perhaps it is also possible to persuade the MPI library to allow RMA even over a "standard" (Ethernet?) network. There might be installation options your system guru knows about.
Hope this helps,
Daniel
-
-
Without further investigation, my guess here would be that the crash actually occurs in the diagonalization. You use 24 MPI processes for 32 k-points, if I am not mistaken. Hence, you treat 8 k-points in parallel and use 3 PEs for the eigenvalue parallelism. This is not recommended on our cluster. I would suggest using 8 PEs for MPI and 3 OpenMP threads instead.
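In a job script, that suggestion would look roughly like this (a sketch for a SLURM-style setup; adapt to your scheduler):

```
export OMP_NUM_THREADS=3   # 3 OpenMP threads per MPI rank for the diagonalization
srun -n 8 fleur_MPI        # 8 MPI ranks, i.e. 8 k-points treated in parallel
```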
Hope this helps
Daniel
-
-
OK, the code crashes. Of course, we cannot really provide useful guidance here except wild guesses :-). You will need to provide more information. Typically, this info can be useful:
- What is your input in detail (giving the inp.xml would help)?
- How did you start the code? How many MPI processes on how many nodes? What about OpenMP?
- How long did the calculation run before crashing?
Hope this helps, Daniel
-
-
After a (only very brief, so perhaps I missed something) look at your input, I am pretty sure that your structure is wrong. The MT radii are too small, indicating that you perhaps used Angstrom instead of a.u. for your lattice or something similar (1 a.u. = 1 Bohr ≈ 0.529 Å, so Angstrom values interpreted as a.u. shrink all distances by almost a factor of two). So please verify that the structure you are using is correct.
Hope this helps,
Daniel
-
-
Regarding the HDF5 issue, I would like to add that the git version of the code (i.e., the source obtained by cloning from iffgit) should also allow you to include the download and compilation of an HDF5 version within the build process of FLEUR itself. This is done by using the option "-hdf5 true" for the configure.sh script.
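I.e., starting from a fresh clone (the URL follows the issue links above):

```
git clone https://iffgit.fz-juelich.de/fleur/fleur.git
cd fleur
./configure.sh -hdf5 true   # downloads and compiles HDF5 as part of the FLEUR build
```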
Hope this helps, Daniel