This is the user forum of the DFT FLAPW code FLEUR. It is meant to be used as a place to ask questions to other users and the developers, and to provide feedback and suggestions. The documentation of the code can be found on the FLEUR homepage.

Indeed, there seems to be a problem with the automatic download and compilation of the wannier90 library. I created a corresponding issue which you might want to follow: https://iffgit.fz-juelich.de/fleur/fleur/-/issues/709

For the time being, you should download and compile the wannier90 library independently of FLEUR and make it available by specifying the correct linker flags to the configure script, as explained e.g. in our interactive tutorial.
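As a rough sketch, the manual route could look like the following. The wannier90 path is hypothetical, and the semicolon-separated FLEUR_LIBRARIES variable is an assumption based on the usual FLEUR configure conventions; please check the tutorial/documentation for the exact spelling in your FLEUR version.

```shell
# Hypothetical location of an independently unpacked wannier90 -- adjust to your setup.
WANNIER_DIR="$HOME/wannier90-3.1.0"

# Inside $WANNIER_DIR one would first build the library, e.g.:
#   make lib            # produces libwannier.a

# Then point the FLEUR configure script at it (variable name assumed,
# entries separated by ";" as in a CMake list):
export FLEUR_LIBRARIES="-L$WANNIER_DIR;-lwannier"
#   ./configure.sh      # run inside the FLEUR source tree with this set
```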

Thanks a lot for your report. This looks very much like a bug (it actually might be a problem we have been trying to track down for some time). Could you please file an issue at https://iffgit.fz-juelich.de/fleur/fleur/-/issues/new? Please use the template for bugs; it will give you some idea of which information to provide.

With very high probability, your problem is a stack size that is too small. FLEUR uses a significant amount of automatic variables that compilers by default put on the stack. Hence, we need a large stack. Therefore, we propose to issue the command 'ulimit -s unlimited' before executing FLEUR. On Linux, a corresponding warning should also be printed to the console in your setting.
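For example, in a bash session this looks like (the FLEUR invocation at the end is just a placeholder for however you start FLEUR on your machine):

```shell
# Remove the shell's stack size limit; this affects all processes
# started from the same session afterwards.
ulimit -s unlimited
ulimit -s              # should now report "unlimited"
# fleur_MPI            # then start FLEUR in the same session
```

Note that the limit is per shell session, so it has to be set in the same shell (or job script) from which FLEUR is launched.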

Of course, a segmentation fault is in most cases the sign of a bug. I would claim there is one exception: insufficient memory. To give more advice, the following information would be helpful:

- Which version of FLEUR are you using?
- On which kind of machine (OS, compiler, etc.)?
- Is the stack size set to "unlimited" in the shell?
- Are you using the inp.xml unmodified from inpgen, or do you change anything?

Thanks a lot for your interest in the G-FLEUR add-on. Unfortunately, this code has not been maintained for several years and is thus not compatible with recent FLEUR versions. I hope I will find some time (sometime) to work on it again.

From what you describe (or I understand), your results should be fine. What is "missing" in our treatment of a Zeeman field is a contribution to the total energy. The total energy has basically two contributions: a) the sum of the eigenvalues and b) direct integrals over products of the charge densities and the potentials. As we include the Zeeman field in the potential only in a "post-processing" step, these (b) terms are not evaluated. But this is relevant for the total energy only; for example, the magnetization you are interested in is evaluated from the eigenvectors, which are calculated taking the Zeeman field into account. The missing term in the total energy does not change anything in the self-consistency. Actually, as long as the Zeeman field you add is constant in space, the missing M·B term can even be evaluated "by hand" from the output ;-)
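To sketch the "by hand" evaluation: for a Zeeman field $\mathbf{B}$ that is constant in space, the missing (b)-type term reduces to a single product (the overall sign depends on the convention with which the field enters the potential):

```latex
\Delta E \;=\; -\int \mathbf{B}\cdot\mathbf{m}(\mathbf{r})\,\mathrm{d}^3 r
\;=\; -\,\mathbf{B}\cdot\mathbf{M},
\qquad
\mathbf{M} = \int \mathbf{m}(\mathbf{r})\,\mathrm{d}^3 r ,
```

where $\mathbf{M}$ is the total spin moment printed in the output, so the correction can be added to the total energy after the fact.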

There exists a module implementing the approach by Guillermo Román-Pérez and José M. Soler (Phys. Rev. Lett. 103, 096102) in FLEUR. As far as I remember, its status is the following:

- It is in the source code (fleur_VDW.F90) but currently not called. So this module would have to be included again (probably somewhere in the potential setup) to be usable. I currently do not know anyone in our team with spare time to do this, but of course you are very welcome to look at it. If you plan to do so and need additional help, please open a corresponding GitLab issue.
- It can be used to calculate a vdW contribution to the total energy, and hence your idea of varying the spacing could be feasible.
- It can also be used to calculate a contribution to the potential which could be included in the SCF cycle. However, I would expect that, if used in a relaxation, it should also lead to additional force contributions that are not implemented.

The calculation of the DMI interaction actually tries to treat a spin spiral with SOC in order to obtain the effect of the SOC on E(q), from which the DMI can be extracted. Since the spin-spiral feature and the SOC feature are conflicting in a usual SCF iteration (SOC breaks the translational symmetry used in the generalized Bloch theorem for spin spirals), this calculation is performed in perturbation theory (SOC as a perturbation of the spin spiral) in a mode implemented analogously to the force-theorem modes. I am sorry that it seems to be broken and promise to have a look at the issue ASAP.
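Schematically, with SOC treated as a perturbation on the spin spiral, the extraction works on the part of E(q) that is odd in q (a hedged sketch of the idea, not FLEUR's exact conventions):

```latex
E(\mathbf{q}) \;\approx\; E_0 + A\,q^2 + \mathbf{D}\cdot\mathbf{q},
\qquad
\mathbf{D}\cdot\hat{\mathbf{q}} \;=\; \lim_{q\to 0}
\frac{\delta E_{\mathrm{SOC}}(\mathbf{q}) - \delta E_{\mathrm{SOC}}(-\mathbf{q})}{2q},
```

where $\delta E_{\mathrm{SOC}}(\mathbf{q})$ is the first-order SOC correction to the eigenvalue sum of the spin-spiral state; without SOC, $E(\mathbf{q}) = E(-\mathbf{q})$ and the linear DMI term vanishes.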

I am not 100% sure I get the point, but in the workflow outlined here, the SOC effects are calculated from changes in the eigenvalue sum. The changes in eigenvalues can of course not simply be decomposed into contributions from individual atoms. What you can do is "switch off" SOC for some atoms, e.g. using the "socscale" parameter, and then investigate the changes you get. If you do that for many atoms/layers, it can of course become a bit of work, and the "contributions" you obtain will not simply add up to the full result. To get such a decomposition of the full SOC effect into additive terms for each atom, I think you would have to do e.g. first-order perturbation theory in the SOC. Then the changes of the eigenvalues are given by the expectation values of the SOC contribution to the Hamiltonian, which can be decomposed into individual additive terms. We do this only for the "DMI" force-theorem mode, which does SOC on top of spin-spiral calculations using perturbation theory.
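In formulas, the first-order argument looks as follows (a sketch; $\mu$ labels the muffin-tin spheres, where the SOC operator is essentially localized):

```latex
\Delta E \;=\; \sum_{n\mathbf{k}}^{\mathrm{occ}}
\langle \psi_{n\mathbf{k}} \,|\, H_{\mathrm{SOC}} \,|\, \psi_{n\mathbf{k}} \rangle
\;=\; \sum_{\mu}\;
\underbrace{\sum_{n\mathbf{k}}^{\mathrm{occ}}
\langle \psi_{n\mathbf{k}} \,|\, H_{\mathrm{SOC}}^{(\mu)} \,|\, \psi_{n\mathbf{k}} \rangle}_{\Delta E_{\mu}} ,
```

since $H_{\mathrm{SOC}} = \sum_{\mu} H_{\mathrm{SOC}}^{(\mu)}$. The terms $\Delta E_{\mu}$ are then additive, atom-resolved contributions, which is exactly what is lost once SOC is included self-consistently.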

I am sorry, but I do not understand your request. In the band structure output, the information you want to plot is actually available. So it is just a matter of your visualization skills to create a colorful band structure :-)

Just to add a few remarks to Gregor's extensive discussion:

- The failure to solve the isolated-atom problem (the error in "differ") is nearly always due to a "broken" potential, i.e. a potential that is far off.
- You calculate an "asymmetric" film, i.e. a film without z-reflection or inversion symmetry. Such setups are very prone to convergence difficulties. In particular, as Gregor already pointed out, initial charge fluctuations can be too large for standard mixing parameters. While a symmetric film is simpler to simulate in most cases, you of course also have to keep stoichiometry in mind.

FLEUR actually calculates the core electron levels in each standard self-consistency cycle. Hence, it is relatively straightforward to obtain the core levels of all atoms in a supercell with e.g. a defect. As you said, the absolute values are usually of little use, but by comparison with a suitable reference system one can easily calculate shifts, which tend to be rather reliable. Please note in addition: core levels are somewhat sensitive to the radial grid as given by the MT radius, the number of grid points, and the logarithmic increment. Hence, keep these values consistent when comparing energies!

FLEUR can also calculate partially occupied core configurations that can be used in a Slater transition state manner to obtain more realistic shifts by also including static screening effects.
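As a reminder of the Slater transition-state idea (the textbook formula, not FLEUR-specific notation): the binding energy of a core level $c$ with occupation $n_c$ is approximated by the eigenvalue evaluated at half occupation,

```latex
E_B(c) \;\approx\; -\,\varepsilon_c\!\left(n_c - \tfrac{1}{2}\right),
```

which picks up static screening effects because $\varepsilon_c$ is computed self-consistently with half an electron removed from the core state.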

This is an error I have not seen since we introduced OpenMP. As Gregor already said, there could be different options to proceed:

a) There might be a "better" version of the Intel MPI available (some versions of the Intel tools have a compiler option -mt_mpi, if I remember correctly).
b) You could of course see if you can get a newer Intel toolchain on your machine :-)
c) You could be very brave, comment out the stop, and see what happens. I actually do not 100% understand what MPI_THREAD_FUNNELED support (which we require and which states that only a single thread will call MPI) adds over MPI_THREAD_SINGLE (which you have and which means you have only one thread) in terms of what we do.

You are trying a rather advanced calculation here. In particular, the l_relaxSQA as well as the l_mtNocoPot feature is probably not yet well studied for such a system. As the different magnetic states you can describe with these features are often separated by very small energy differences, one would expect convergence problems.

I would be worried about convergence in the sense that the "magnetic degrees of freedom" might converge much, much slower than the charge. If this is the case, you should see a situation in which the charge converges up to some point and then the magnetisation direction continues rotating very slowly. I believe that this might be a fundamental problem of such calculations, and we are currently thinking e.g. about better preconditioners to deal with it.

Of course, if you converged to a distance of 1E-5, your convergence is rather good already, and the problem could also be unrelated to magnetism but be related e.g. to a small instability in the core solver. Perhaps your property of interest is actually converged already?