This is the user forum of the DFT FLAPW code FLEUR. It is meant as a place to ask questions of other users and the developers, and to provide feedback and suggestions. The documentation of the code can be found on the FLEUR homepage.
Would you mind telling me how I can add one electron to the system, in particular localized on a specific ion, in inp.xml? Could you please tell me how to modify the inp.xml file accordingly?
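For reference, a minimal sketch of the two places in inp.xml that control electron counts, based on the MaX-release schema; the species name, occupation numbers, and electron count below are placeholders to be adapted to your own file. Note that the ion in question needs its own species, and that the SCF cycle redistributes charge, so the per-state occupations mainly shape the starting density:

    <!-- Per-species occupations: put the extra electron on one ion type -->
    <species name="Fe-1" element="Fe" atomicNumber="26">
      ...
      <electronConfig>
        <coreConfig>[Ne]</coreConfig>
        <valenceConfig>(3s1/2) (3p1/2) (3p3/2) (4s1/2) (3d3/2) (3d5/2)</valenceConfig>
        <stateOccupation state="(3d5/2)" spinUp="2.50" spinDown="1.50"/>
      </electronConfig>
      ...
    </species>

    <!-- Total electron count: must match the sum of all occupations -->
    <bzIntegration valenceElectrons="17.00" ...>

Increasing valenceElectrons beyond the neutral value makes the unit cell charged, which is worth keeping in mind when interpreting total energies.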
There are 14 lines for the spin-up block of one atom. I have two questions: 1) Why does the code put occupation into the outer column (0.9683307845959), which in principle should be zero? 2) The sequence in which the d orbitals are filled seems to be: column 3 of row 3, column 5 of row 5, column 7 of row 7, column 2 of row 10, column 4 of row 12. Could you please tell me which of these is dx2-y2, dz2, dxy, dxz, or dyz?
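For orientation: the LDA+U density-matrix blocks are given in the basis of complex spherical harmonics Y_lm with m running from -l to +l, not directly in the real dz2/dx2-y2/dxy/dxz/dyz basis, so single file columns do not map one-to-one onto real orbitals. A small sketch of the standard unitary transformation between the two bases for l = 2; the m ordering (-2...+2) is an assumption to be checked against the FLEUR output:

    import numpy as np

    s = 1 / np.sqrt(2)
    # Rows: real d orbitals; columns: complex Y_2m with m = -2, -1, 0, 1, 2
    U = np.array([
        [1j*s, 0,    0, 0,   -1j*s],   # d_xy
        [0,    1j*s, 0, 1j*s, 0   ],   # d_yz
        [0,    0,    1, 0,    0   ],   # d_z2
        [0,    s,    0, -s,   0   ],   # d_xz
        [s,    0,    0, 0,    s   ],   # d_x2-y2
    ])

    def to_real_basis(rho_complex):
        # rho_real = U rho U^dagger for a 5x5 density matrix in the Y_2m basis
        return U @ rho_complex @ U.conj().T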
I am seeking clarification on the generation of the n_mmp_mat_out file in LDA+U calculations. Specifically, for a system with two iron (Fe) atoms, this file contains 56 lines and is organized in a 7x7 matrix format. I would like to understand how this file is structured and written. Additionally, I am interested in how the density-matrix information printed in out.xml can be used to interpret or manipulate the n_mmp_mat_out file.
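As a working hypothesis about the layout (7 real numbers per line; 14 lines per 7x7 complex block with real and imaginary parts interleaved; one block per atom and spin, hence 2 atoms x 2 spins x 14 = 56 lines), a short parser sketch. The element order within a block is assumed here to be Fortran column-major and should be verified against the density matrix printed in out.xml:

    import numpy as np

    def read_n_mmp(path, n_blocks=4):
        vals = np.loadtxt(path).ravel()        # 56 lines x 7 reals = 392 numbers
        blocks = vals.reshape(n_blocks, 98)    # 98 reals = 49 complex per block
        mats = blocks[:, 0::2] + 1j * blocks[:, 1::2]
        # Assumed Fortran (column-major) order; transpose if out.xml disagrees
        return [m.reshape(7, 7, order="F") for m in mats]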
I would like to calculate the magnetic properties of a collinear magnet. Experimentally it is a semiconductor, but in DFT it remains metallic even after applying the U parameter. I found that the density matrix entering the U correction is wrong: one particular d sublevel of the minority spin should be fully occupied to obtain the insulating ground state. Could you please tell me how I can make FLEUR start from a modified n_mmp_mat_out file? I know that the density matrix has dimension (2l+1) x (2l+1), i.e. 5x5 for l = 2, although the file itself stores 7x7 blocks. Could you please tell me more about what each column represents?
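A sketch of the reverse direction, reusing read_n_mmp from the sketch above: set one diagonal element of a spin block and write the file back. The block order (whether atoms or spins run fastest), the index of the targeted sublevel, and the %20.13f output format are all assumptions to verify against your own files; as far as I know, FLEUR picks up a file named n_mmp_mat at the start of an LDA+U run, so the edited matrix is written under that name here:

    import numpy as np

    def write_n_mmp(path, mats):
        rows = []
        for m in mats:
            flat = m.ravel(order="F")          # same assumed element order
            reim = np.empty(2 * flat.size)
            reim[0::2], reim[1::2] = flat.real, flat.imag
            rows.append(reim)
        np.savetxt(path, np.concatenate(rows).reshape(-1, 7), fmt="%20.13f")

    mats = read_n_mmp("n_mmp_mat_out")
    mats[1][4, 4] = 1.0    # hypothetical indices: fill one d sublevel in block 1
    write_n_mmp("n_mmp_mat", mats)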
For a thin-film setup, the list of atoms should be arranged such that the film is centered around the (x,y,0) plane. But for the structure below, it seems impossible to put the magnetic ions at z = 0:

#-------lattice vectors-------------
 3.335691  0.000000  0.000000
-0.035785  3.814516  0.000000
 0.000000  0.000000 31.597437
#-------Positions-------------------
atom1          0.750000  0.750000  0.416122
atom1          0.250000  0.250000  0.583878
atom2          0.250000  0.250000  0.488096
atom2          0.750000  0.750000  0.511904
magnetic-atom  0.250000  0.750000  0.470494
magnetic-atom  0.750000  0.250000  0.529506

Could you please tell me how one can solve this problem for this geometry?
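Note that no single atom needs to sit exactly at z = 0; the film as a whole has to be centered around z = 0. Since the z coordinates above are already symmetric about 0.5, one possible centering is simply to subtract 0.5 from every z coordinate:

atom1          0.750000  0.750000  -0.083878
atom1          0.250000  0.250000   0.083878
atom2          0.250000  0.250000  -0.011904
atom2          0.750000  0.750000   0.011904
magnetic-atom  0.250000  0.750000  -0.029506
magnetic-atom  0.750000  0.250000   0.029506

(For a film input, inpgen may additionally expect the z coordinates in Cartesian rather than fractional units; check the inpgen documentation.)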
I know how to run the code. But I have a practical question: what is the optimal way to run 21 k-points? Suppose 16 cores per node are available. Is one node with 16 cores the optimum, or is there a better alternative? The k-point mesh is 6x7x1.
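Since the MPI parallelization of FLEUR distributes k-points over the ranks, a rank count that divides the 21 k-points evenly (3, 7, or 21) gives perfect load balance, whereas 16 ranks leaves some ranks with two k-points and others with one. A sketch for one 16-core node, assuming a Slurm system and a hybrid MPI+OpenMP build; the binary name is a placeholder:

    #!/bin/bash
    #SBATCH --nodes=1
    #SBATCH --ntasks=7           # 21 k-points / 7 ranks = 3 k-points per rank
    #SBATCH --cpus-per-task=2    # use 14 of the 16 cores via OpenMP threads
    export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
    srun fleur_MPI               # placeholder binary name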
I tried to run /local/th1/DFT/fleur_MPI_MaXR5_th1 on a node of the th1 partition and got a segmentation fault and an MPI_Abort. The node log does not show any out-of-memory errors for this run. This is the Slurm error log:
I/O warning : failed to load external entity "relax.xml"
Signal 11 detected on PE: 9
This might be due to either:
- A bug
- Your job running out of memory
- Your job got killed externally (e.g. no cpu-time left)
- ....
Please check and report if you believe you found a bug
Abort(0) on node 9 (rank 9 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 0) - process 9
Signal 11 detected on PE: 6
This might be due to either:
- A bug
- Your job running out of memory
- Your job got killed externally (e.g. no cpu-time left)
- ....
Please check and report if you believe you found a bug
Abort(0) on node 6 (rank 6 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 0) - process 6
Signal 11 detected on PE: 7
This might be due to either:
- A bug
- Your job running out of memory
- Your job got killed externally (e.g. no cpu-time left)
- ....
Please check and report if you believe you found a bug
Abort(0) on node 7 (rank 7 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 0) - process 7
Signal 11 detected on PE: 8
This might be due to either:
- A bug
- Your job running out of memory
- Your job got killed externally (e.g. no cpu-time left)
- ....
Please check and report if you believe you found a bug
Abort(0) on node 8 (rank 8 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 0) - process 8
Signal 11 detected on PE: 10
This might be due to either:
- A bug
- Your job running out of memory
- Your job got killed externally (e.g. no cpu-time left)
- ....
Please check and report if you believe you found a bug
Abort(0) on node 10 (rank 10 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 0) - process 10
Signal 11 detected on PE: 11
This might be due to either:
- A bug
- Your job running out of memory
- Your job got killed externally (e.g. no cpu-time left)
- ....
Please check and report if you believe you found a bug
Abort(0) on node 11 (rank 11 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 0) - process 11
Signal Signal Signal Signal Signal Signal
slurmstepd: error: *** STEP 3218157.0 ON iffcluster0702 CANCELLED AT 2021-10-27T10:26:05 ***
srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
srun: error: iffcluster0702: task 7: Killed
srun: error: iffcluster0702: tasks 1-2,4,6,10-11: Killed
srun: error: iffcluster0702: tasks 0,3,5,8-9: Killed
For the primitive cell (22 atoms) it works well, but the supercell (88 atoms) crashes with this error. Could you please tell me how I can solve it?
I have started to calculate SOC for a thin film with a collinear magnetic configuration. Without SOC, I could converge the ground state in GGA+U. When I switch on the SOC flag, I receive the following error:
Error message: k-point set is not compatible to missing time-reversal symmetry in calculation.
Error occurred in subroutine: gen_bz
Error from PE: 0/1
Could you please tell me how I can solve this error?
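For context on this message: with SOC in a magnetic system, time-reversal symmetry is lost, so a k-point set that was reduced using that symmetry no longer fits, and the usual remedy is to generate a new k-point list over the full Brillouin zone and select it in inp.xml. A hedged sketch of what an explicitly listed set looks like in the MaX-release kpts.xml/inp.xml schema; the list name, count, coordinates, and weights below are placeholders:

    <kPointListSelection listName="full-bz"/>
    ...
    <kPointLists>
      <kPointList name="full-bz" count="42">
        <kPoint weight="1.000000"> 0.000000 0.000000 0.000000 </kPoint>
        <kPoint weight="1.000000"> 0.166667 0.000000 0.000000 </kPoint>
        <!-- ... one entry per point of the full 6x7x1 mesh ... -->
      </kPointList>
    </kPointLists>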