Hi Gregor
My bad. Yes, I must have had a stray density. Cleaning up and rerunning now; it seems to be working fine.
Ciao Terry
-
I am getting the above error.
It's a larger film calculation than I have done before (286 atoms in the original cell), so I'm wondering whether there are hard-coded size limits in 6.0?
-
Yes, that's great. Thanks Gregor.
-
Hi folks
In out.xml can I find something like site-projected occupations for valence states? If not, what post-processing do I need to do? I want to distinguish between e.g. Ti(III) and Ti(IV) in the converged density for a complex structure.
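The closest thing I have spotted so far is the l-decomposed muffin-tin charges printed per iteration, something like the block below (going from memory of my own out.xml, with invented numbers). Is there anything finer-grained than this, or is this what I should be using?

    <mtCharges>
       <mtCharge atomType="1" total="2.751" s="0.312" p="0.404" d="2.011" f="0.024"/>
    </mtCharges>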
Ciao Terry
-
Hi Gregor
Can I suggest that the wording of that message be tweaked a bit? In hindsight I can see what it means, but when I first read it in this thread I had no idea what the touch message meant (even as a full-time Linux/Unix user for more than two decades!).
Ciao Terry
-
Hi folks
Does anyone care to comment on accessing "higher rungs" with fleur?
Ciao
-
Hi folks
I want to run (3D PBC) M06L and HSE06 calculations. How do I do that?
I have found some workshop slides from 2019 saying that, e.g., putting in a GGA XC potential works. Setting up LibXCName in xcFunctional in inp.xml in the obvious way crashes out with "MetaGGA is not implemented for potentials". What is the right way?
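In case it helps, this is roughly the block I tried for M06-L (element and attribute names as I understand them from the docs, functional names spelled the LibXC way; whether these exact attributes are right is part of my question):

    <xcFunctional name="LibXC" relativisticCorrections="F">
       <LibXCName exchange="mgga_x_m06_l" correlation="mgga_c_m06_l"/>
    </xcFunctional>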
Ciao Terry
-
Hi folks
I see that MaX-6.0 has been released.
Does a concise summary exist of the major differences between 5.1 and 6.0?
Ciao Terry
-
Hi Gregor
Thanks for the advice. I will explore those options.
Ciao Terry
-
In out.xml we have: <basis nvd="23313" lmaxd="14" nlotot="0"/>
I didn't specify any options for the diagonalisation when building. The configuration output tells me that SCALAPACK was found (via MKL), but nothing else that looks diagonalisation-related. Is that the info you want?
-
Here's juDFT_times.json (from a slightly different geometry). Run with 32 MPI processes, each allocated 9 cores (9 OpenMP threads), on 28-core Broadwell nodes: up to three MPI processes per node, 11 nodes.
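For reference, the batch settings amount to something like this (SLURM directives from my site's setup, nothing FLEUR-specific; the binary name is whatever your MPI build produced):

    #SBATCH --nodes=11
    #SBATCH --ntasks=32
    #SBATCH --cpus-per-task=9
    export OMP_NUM_THREADS=9
    srun fleur_MPI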
-
Hi folks
I am continuing my investigation of ZnO slabs. For the attached inputs I have 32 k-points, so I run with 32 MPI processes. For similar (but different) slabs on the machine I am running on, I can get good CPU utilisation if I assign up to 14 cores to each MPI process. (FWIW, the next compatible graining puts 1 process on each 28-core node, which does not perform well.)
For the attached job I don't seem to be able to get more than around 5-6 cores working effectively on each k-point; giving fleur more cores does not decrease wall times. Is there something wrong with the way I am setting this up (launch line sketched below), or am I just hitting some inherent scaling limit within fleur for the peculiarities of this job? (All this with 5.1 compiled with OpenMPI 4.1 and Intel 2019.5 compilers.)
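For what it's worth, the launch line is along these lines (OpenMPI syntax; the mapping string is my own attempt, so treat it as illustrative rather than recommended):

    export OMP_NUM_THREADS=14
    mpirun -np 32 --map-by ppr:2:node:PE=14 --bind-to core fleur_MPI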
Ciao Terry
-
Hi folks
Somewhat inspired by the previous thread on simulating XPS, I am calculating the O 1s core levels for surface species of a metal oxide.
Mostly it's straightforward, and the results converge quite quickly with respect to the thickness of the slab. However, when I decorate the surface oxides with hydrogen atoms, the convergence slows dramatically: whenever I make a thicker slab and relax, the O 1s core levels of lattice oxygen in the centre of the slab increase. That makes it impossible to claim the surface oxygen levels are well-defined relative to the bulk lattice oxygen that I'd like to use as a reference.
If I have bare oxide surfaces, or surfaces decorated with water, this does not happen.
I am using a consistent set of MT parameters and basis cutoffs somewhat increased from the default. What am I missing here?
Ciao Terry
-
I wonder if I might get someone to comment on where the state occupations in inpgen come from. If I look at the output for e.g. Fe-Cu, I see these curious 3d occupations (which I read as spin-up/spin-down pairs):
Fe: 1.2/1.2, 2.9/0.7
Co: 1.4/1.4, 2.9/1.3
Ni: 1.6/1.6, 2.95/1.85
Cu: 0.5/0.5 (4s!)
The only concrete thing I can see is that they add up to the right number of electrons. Where have these numbers come from? What are the implications of changing them in inp.xml?
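To be explicit, I mean the numbers in blocks like this one (an Fe sketch; element and attribute names copied from my inp.xml, values from the inpgen output above, core states elided; states not explicitly listed seem to be taken as filled, as far as I can tell):

    <electronConfig>
       <valenceConfig>(4s1/2) (3d3/2) (3d5/2)</valenceConfig>
       <stateOccupation state="(3d3/2)" spinUp="1.2" spinDown="1.2"/>
       <stateOccupation state="(3d5/2)" spinUp="2.9" spinDown=".7"/>
    </electronConfig>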
-
Worked a treat. Thanks Gregor!
-
Hi folks
I have hit the error in the subject line when looking at ZnO slabs. FYI, I include the inpgen input as an attachment. (FWIW, I don't see anything special about this slab that would make me think it's hard.)
The only relevant thing I can get Professor Google to link me to is a two-month-old, closed bug report from Daniel Wortmann (#609). Comments there suggest that this either has been fixed, will likely be fixed soon, or at least has work-arounds.
Is there, e.g., a git branch or code change I should try to get around this issue?
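To be explicit about what I'm asking: I'm happy to build from source, so is it as simple as something like the following (repository URL as I understand it; the branch name is a guess on my part)?

    git clone https://iffgit.fz-juelich.de/fleur/fleur.git
    cd fleur
    git checkout develop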
-
Quote from JensB in post #4: "Therefore for a real trajectory, the struct-relax.xsf alone won't help you."
But struct-relax.xsf has all the information needed to reproduce the geometry. You don't directly have the symmetry, but I don't see why that isn't enough for most purposes. I have my own tools to stitch such things together, with permutations and shifts if necessary. Fortran and awk are my hammers, so I suspect you don't want my code.
-
Hi Gregor
Fear not! I was using the term "trajectory" in the loosest way possible. struct-relax.xsf is good enough for my purposes, thanks.
-
Does anyone have suggestions for tools to extract the trajectory from a structural optimisation? I could hack together something to pull info out of relax.xml and sym.xml to reconstruct a full set of Cartesian coordinates (a fallback sketch below), but surely suitable tools already exist...
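The fallback I have in mind is a quick-and-dirty filter like this (assuming the per-step positions and forces sit in <posforce> elements in relax.xml, as they appear to in mine; the symmetry from sym.xml would still need to be applied afterwards):

    awk '/<posforce>/ { gsub(/<[^>]*>/, ""); print }' relax.xml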
-
Some guesses (launch-line sketches after the list):
If you have 5 cores and are trying to run 1 MPI process per core, ensure that the environment variable OMP_NUM_THREADS has the value 1.
If you have 5 nodes, each with six physical cores (12 virtual hyperthreaded cores), then ensure OMP_NUM_THREADS is 6.
If you have 5 nodes, each with 12 physical cores, then it may be that mpirun is not distributing your processes correctly across the nodes.
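For the first two cases, the launch lines would look roughly like this (OpenMPI syntax; I'm assuming an MPI binary called fleur_MPI, so adjust names and counts to your setup):

    # Case 1: 5 cores, one MPI process per core, no threading
    export OMP_NUM_THREADS=1
    mpirun -np 5 --bind-to core fleur_MPI

    # Case 2: 5 nodes with 6 physical cores each, one MPI process per node
    export OMP_NUM_THREADS=6
    mpirun -np 5 --map-by ppr:1:node:PE=6 --bind-to core fleur_MPI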