pyiron.base.job.generic module

class pyiron.base.job.generic.GenericError(job)[source]

Bases: object

print_message(string='')[source]
print_queue(string='')[source]
class pyiron.base.job.generic.GenericJob(project, job_name)[source]

Bases: pyiron.base.job.core.JobCore

The GenericJob class extends the JobCore class with all the functionality required to run the job object. All specific Hamiltonians are derived from this class, so it should contain the properties and routines common to all jobs. The functions in this module should be as generic as possible.

Parameters
  • project (ProjectHDFio) – ProjectHDFio instance which points to the HDF5 file the job is stored in

  • job_name (str) – name of the job, which has to be unique within the project
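
A minimal usage sketch (hedged; the project path, job type and job name are illustrative and assume the built-in ExampleJob type is available):

    from pyiron import Project

    pr = Project("demo_project")                      # creates/opens the project directory
    job = pr.create_job("ExampleJob", "job_example")  # returns a GenericJob subclass instance
    job.run()                                         # execute the job with the default run mode
    print(job.status)                                 # e.g. finished
    print(job.job_id)                                 # unique id in the pyiron database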

.. attribute:: job_name

name of the job, which has to be unique within the project

.. attribute:: status

execution status of the job, can be one of the following: [initialized, appended, created, submitted, running, aborted, collect, suspended, refresh, busy, finished]

.. attribute:: job_id

unique id to identify the job in the pyiron database

.. attribute:: parent_id

job id of the predecessor job - the job which was executed before the current one in the current job series

.. attribute:: master_id

job id of the master job - a meta job which groups a series of jobs, which are executed either in parallel or in serial.

.. attribute:: child_ids

list of child job ids - only meta jobs have child jobs - jobs which list the meta job as their master

.. attribute:: project

Project instance the job is located in

.. attribute:: project_hdf5

ProjectHDFio instance which points to the HDF5 file the job is stored in

.. attribute:: job_info_str

short string describing the job by its job_name and job ID - mainly used for logging

.. attribute:: working_directory

working directory the job is executed in - outside the HDF5 file

.. attribute:: path

path to the job as a combination of absolute file system path and path within the HDF5 file.

.. attribute:: version

Version of the hamiltonian, which is also the version of the executable unless a custom executable is used.

.. attribute:: executable

Executable used to run the job - usually the path to an external executable.

.. attribute:: library_activated

For job types which offer a Python library, pyiron can use the Python library instead of an external executable.

.. attribute:: server

Server object to handle the execution environment for the job.

.. attribute:: queue_id

the ID returned from the queuing system - it is most likely not the same as the job ID.

.. attribute:: logger

logger object to monitor the external execution and internal pyiron warnings.

.. attribute:: restart_file_list

list of files which are used to restart the calculation.

.. attribute:: exclude_nodes_hdf

list of nodes which are excluded from storing in the hdf5 file.

.. attribute:: exclude_groups_hdf

list of groups which are excluded from storing in the hdf5 file.

.. attribute:: job_type

Job type object with all the available job types: [‘ExampleJob’, ‘SerialMaster’, ‘ParallelMaster’, ‘ScriptJob’, ‘ListMaster’]

append(job)[source]

Metajobs like GenericMaster, ParallelMaster, SerialMaster or ListMaster allow other jobs to be appended. In the GenericJob definition this is only a template function.

check_setup()[source]

Checks whether certain parameters (such as the plane wave cutoff radius in DFT) have been changed from the pyiron standard values to allow for physically meaningful results. This function is called manually or when the job is submitted to the queueing system.

clear_job()[source]

Convenience function to clear job info after suspend. Mimics deletion of all the job info after suspend in a local test environment.

collect_logfiles()[source]

Collect the log files of the external executable and store the information in the HDF5 file. This method has to be implemented in the individual hamiltonians.

collect_output()[source]

Collect the output files of the external executable and store the information in the HDF5 file. This method has to be implemented in the individual hamiltonians.

convergence_check()[source]

Validate the convergence of the calculation.

Returns

If the calculation is converged

Return type

(bool)

copy()[source]

Copy the GenericJob object which links to the job and its HDF5 file

Returns

New GenericJob object pointing to the same job

Return type

GenericJob

copy_file_to_working_directory(file)[source]

Copy a specific file to the working directory before the job is executed.

Parameters

file (str) – path of the file to be copied.
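
An illustrative sketch, assuming job is an existing GenericJob and the file name is purely hypothetical:

    job.copy_file_to_working_directory("extra_potential.dat")  # copied next to the generated input files
    job.run()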

copy_template(project, new_job_name=None)[source]

Copy the content of the job including the HDF5 file but without the output data to a new location

Parameters
  • project (ProjectHDFio) – project to copy the job to

  • new_job_name (str) – to duplicate the job within the same project it is necessary to modify the job name - optional

Returns

GenericJob object pointing to the new location.

Return type

GenericJob

copy_to(project=None, new_job_name=None, input_only=False, new_database_entry=True)[source]

Copy the content of the job including the HDF5 file to a new location.

Parameters
  • project (ProjectHDFio) – The project to copy the job to. (Default is None, use the same project.)

  • new_job_name (str) – The new name to assign the duplicate job. Required if the project is None or the same project as the copied job. (Default is None, try to keep the same name.)

  • input_only (bool) – [True/False] Whether to copy only the input. (Default is False.)

  • new_database_entry (bool) – [True/False] Whether to create a new database entry. If input_only is True then new_database_entry is False. (Default is True.)

Returns

GenericJob object pointing to the new location.

Return type

GenericJob
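
A hedged sketch of the two most common cases, assuming job is an existing GenericJob:

    # duplicate the job within the same project - a new name is then required
    job_copy = job.copy_to(new_job_name="job_copy")

    # copy only the input and skip the database entry, e.g. to reuse the job as a template
    job_template = job.copy_to(new_job_name="job_template", input_only=True, new_database_entry=False)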

create_job(job_type, job_name)[source]

Create one of the following jobs:

  • ‘StructureContainer’:
  • ‘StructurePipeline’:
  • ‘AtomisticExampleJob’: example job just generating random number
  • ‘ExampleJob’: example job just generating random number
  • ‘Lammps’:
  • ‘KMC’:
  • ‘Sphinx’:
  • ‘Vasp’:
  • ‘GenericMaster’:
  • ‘SerialMaster’: series of jobs run in serial
  • ‘AtomisticSerialMaster’:
  • ‘ParallelMaster’: series of jobs run in parallel
  • ‘KmcMaster’:
  • ‘ThermoLambdaMaster’:
  • ‘RandomSeedMaster’:
  • ‘MeamFit’:
  • ‘Murnaghan’:
  • ‘MinimizeMurnaghan’:
  • ‘ElasticMatrix’:
  • ‘ConvergenceVolume’:
  • ‘ConvergenceEncutParallel’:
  • ‘ConvergenceKpointParallel’:
  • ‘PhonopyMaster’:
  • ‘DefectFormationEnergy’:
  • ‘LammpsASE’:
  • ‘PipelineMaster’:
  • ‘TransformationPath’:
  • ‘ThermoIntEamQh’:
  • ‘ThermoIntDftEam’:
  • ‘ScriptJob’: Python script or jupyter notebook job container
  • ‘ListMaster’: list of jobs

Parameters
  • job_type (str) – job type can be [‘StructureContainer’, ‘StructurePipeline’, ‘AtomisticExampleJob’, ‘ExampleJob’, ‘Lammps’, ‘KMC’, ‘Sphinx’, ‘Vasp’, ‘GenericMaster’, ‘SerialMaster’, ‘AtomisticSerialMaster’, ‘ParallelMaster’, ‘KmcMaster’, ‘ThermoLambdaMaster’, ‘RandomSeedMaster’, ‘MeamFit’, ‘Murnaghan’, ‘MinimizeMurnaghan’, ‘ElasticMatrix’, ‘ConvergenceVolume’, ‘ConvergenceEncutParallel’, ‘ConvergenceKpointParallel’, ’PhonopyMaster’, ‘DefectFormationEnergy’, ‘LammpsASE’, ‘PipelineMaster’, ’TransformationPath’, ‘ThermoIntEamQh’, ‘ThermoIntDftEam’, ‘ScriptJob’, ‘ListMaster’]

  • job_name (str) – name of the job

Returns

job object depending on the job_type selected

Return type

GenericJob
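
An illustrative sketch, assuming job is an existing GenericJob and the requested job type is installed (type and name are examples only):

    lmp = job.create_job(job_type="Lammps", job_name="lmp_relax")  # new job in the same project
    lmp.run()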

create_pipeline(step_lst)[source]

Create a job pipeline

Parameters

step_lst (list) – List of functions which create calculations

Returns

Return type

FlexibleMaster

db_entry()[source]

Generate the initial database entry for the current GenericJob

Returns

database dictionary {“username”, “projectpath”, “project”, “job”, “subjob”, “hamversion”, “hamilton”, “status”, “computer”, “timestart”, “masterid”, “parentid”}

Return type

(dict)
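
A small sketch of inspecting the entry, assuming job is an existing GenericJob (the keys follow the dictionary listed above):

    entry = job.db_entry()
    print(entry["status"], entry["hamilton"], entry["hamversion"])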

drop_status_to_aborted()[source]

Change the job status to aborted when the job was intercepted.

property exclude_groups_hdf

Get the list of groups which are excluded from storing in the hdf5 file

Returns

groups(list)

property exclude_nodes_hdf

Get the list of nodes which are excluded from storing in the hdf5 file

Returns

nodes(list)
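
A hedged sketch, assuming job is an existing GenericJob and that the corresponding setters are available; the group and node names are purely hypothetical:

    job.exclude_groups_hdf = ["interactive_cache"]  # hypothetical group to skip when writing HDF5
    job.exclude_nodes_hdf = ["scratch_data"]        # hypothetical node to skip when writing HDF5
    print(job.exclude_groups_hdf, job.exclude_nodes_hdf)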

property executable

Get the executable used to run the job - usually the path to an external executable.

Returns

executable path

Return type

(str/pyiron.base.job.executable.Executable)

from_hdf(hdf=None, group_name=None)[source]

Restore the GenericJob from an HDF5 file

Parameters
  • hdf (ProjectHDFio) – HDF5 group object - optional

  • group_name (str) – HDF5 subgroup name - optional

interactive_close()[source]

For jobs whose executables are available as a Python library, the job can be executed with a library call instead of calling an external executable. This is usually faster than a single-core Python job. After the interactive execution, the job can be closed using the interactive_close function.

interactive_fetch()[source]

For jobs whose executables are available as a Python library, the job can be executed with a library call instead of calling an external executable. This is usually faster than a single-core Python job. To access the output data during the execution the interactive_fetch function is used.

interactive_flush(path='generic', include_last_step=True)[source]

For jobs whose executables are available as a Python library, the job can be executed with a library call instead of calling an external executable. This is usually faster than a single-core Python job. To write the interactive cache to the HDF5 file the interactive_flush function is used.
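
A hedged sketch of the interactive workflow, assuming the concrete job type supports library execution and that the interactive run mode can be selected via the server object:

    job.server.run_mode.interactive = True  # assumed switch to library execution
    job.run()                                # dispatches to run_if_interactive()
    data = job.interactive_fetch()           # access output while the job is running
    job.interactive_flush(path="generic")    # write the interactive cache to the HDF5 file
    job.interactive_close()                  # finalize the interactive session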

job_file_name(file_name, cwd=None)[source]

Combine the file name file_name with the path of the current working directory.

Parameters
  • file_name (str) – name of the file

  • cwd (str) – current working directory - this overwrites self.project_hdf5.working_directory - optional

Returns

absolute path to the file in the current working directory

Return type

str
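
For example (paths are illustrative), assuming job is an existing GenericJob:

    log_file = job.job_file_name("job.log")              # absolute path inside the working directory
    tmp_file = job.job_file_name("job.log", cwd="/tmp")  # cwd overrides the working directory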

property job_type

[‘ExampleJob’, ‘SerialMaster’, ‘ParallelMaster’, ‘ScriptJob’, ‘ListMaster’]

Returns

Job type object

Return type

JobTypeChoice

Type

Job type object with all the available job types

kill()[source]
list_all()[source]

List all groups and nodes of the HDF5 file - where groups are equivalent to directories and nodes to files.

Returns

{‘groups’: [list of groups], ‘nodes’: [list of nodes]}

Return type

dict
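
A short sketch, assuming job is an existing GenericJob:

    content = job.list_all()
    print(content["groups"])  # HDF5 groups, comparable to directories
    print(content["nodes"])   # HDF5 nodes, comparable to files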

property logger

Get the logger object to monitor the external execution and internal pyiron warnings.

Returns

logger object

Return type

logging.getLogger()

property queue_id

Get the queue ID, the ID returned from the queuing system - it is most likely not the same as the job ID.

Returns

queue ID

Return type

int

refresh_job_status()[source]

Refresh job status by updating the job status with the status from the database if a job ID is available.

remove_child()[source]

Internal remove function which also removes child jobs. Never use this command, since it will destroy the integrity of your project.

reset_job_id(job_id=None)[source]

Reset the job ID: sets the job_id to None in the GenericJob as well as in all connected modules like JobStatus.

restart(job_name=None, job_type=None)[source]

Create a restart calculation from the current calculation - in the GenericJob this is the same as create_job(). A restart is only possible after the current job has finished. If you want to run the same job again with different input parameters use job.run(run_again=True) instead.

Parameters
  • job_name (str) – job name of the new calculation - default=<job_name>_restart

  • job_type (str) – job type of the new calculation - default is the same type as the existing calculation

Returns:
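
A hedged sketch, assuming job is a finished GenericJob; the new job name is illustrative:

    job_new = job.restart(job_name="job_continue")  # defaults to <job_name>_restart if omitted
    job_new.run()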

property restart_file_dict

A dictionary of the new names of the copied restart files

property restart_file_list

Get the list of files which are used to restart the calculation.

Returns

list of files

Return type

list
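
A hedged sketch of providing restart files, assuming job_new is a restarted job as above; the file names and the mapping are purely illustrative and assume the corresponding setter is available:

    job_new.restart_file_list.append("checkpoint.dat")               # file copied into the new working directory
    job_new.restart_file_dict = {"checkpoint.dat": "input.restart"}  # assumed mapping: original name -> new name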

run(run_again=False, repair=False, debug=False, run_mode=None)[source]

This is the main run function; depending on the job status [‘initialized’, ‘created’, ‘submitted’, ‘running’, ‘collect’, ‘finished’, ‘refresh’, ‘suspended’] the corresponding run mode is chosen.

Parameters
  • run_again (bool) – Delete the existing job and run the simulation again.

  • repair (bool) – Set the job status to created and run the simulation again.

  • debug (bool) – Debug Mode - defines the log level of the subprocess the job is executed in.

  • run_mode (str) – [‘modal’, ‘non_modal’, ‘queue’, ‘manual’] overwrites self.server.run_mode
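
Typical calls, sketched under the assumption that job is an existing GenericJob:

    job.run()                   # uses the run mode configured in job.server.run_mode
    job.run(run_mode="queue")   # submit to the queuing system instead
    job.run(run_again=True)     # delete the existing job and run the simulation again
    job.run(repair=True)        # reset the status to created and run again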

run_if_interactive()[source]

For jobs whose executables are available as a Python library, the job can be executed with a library call instead of calling an external executable. This is usually faster than a single-core Python job.

run_if_interactive_non_modal()[source]

For jobs whose executables are available as a Python library, the job can be executed with a library call instead of calling an external executable. This is usually faster than a single-core Python job.

run_if_manually(_manually_print=True)[source]

The run if manually function is called by run if the user decides to execute the simulation manually - this might be helpful to debug a new job type or test updated executables.

Parameters

_manually_print (bool) – Print explanation how to run the simulation manually - default=True.

run_if_modal()[source]

The run if modal function is called by run to execute the simulation, while waiting for the output. For this we use subprocess.check_output()

run_if_non_modal()[source]

The run if non modal function is called by run to execute the simulation in the background. For this we use multiprocessing.Process()

run_if_refresh()[source]

Internal helper function; the run_if_refresh function is called when the job status is ‘refresh’. If the job was suspended previously, it is started again to be continued.

run_if_scheduler()[source]

The run if queue function is called by run if the user decides to submit the job to a queuing system. The job is submitted to the queuing system using subprocess.Popen()

Returns

Returns the queue ID for the job.

Return type

int

run_static()[source]

The run static function is called by run to execute the simulation.

save()[source]

Save the object, by writing the content to the HDF5 file and storing an entry in the database.

Returns

Job ID stored in the database

Return type

(int)

send_to_database()[source]

If the job should be stored in an external/public database this could be implemented here, but currently it is just a placeholder.

property server

Get the server object to handle the execution environment for the job.

Returns

server object

Return type

Server
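
A hedged sketch of configuring the execution environment before run(); the attribute names are the commonly used Server settings but depend on the Server implementation, and the queue name is hypothetical:

    job.server.cores = 4                  # number of cores requested for the run
    job.server.queue = "short"            # queue configured in the queue adapter (name is an example)
    job.server.run_mode.non_modal = True  # run in the background instead of blocking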

set_input_to_read_only()[source]

This function enforces read-only mode for the input classes, but it has to be implemented in the individual classes.

signal_intercept(sig, frame)[source]
Parameters
  • sig

  • frame

Returns:

suspend()[source]

Suspend the job by storing the object and its state persistently in the HDF5 file and exit it.

to_hdf(hdf=None, group_name=None)[source]

Store the GenericJob in an HDF5 file

Parameters
  • hdf (ProjectHDFio) – HDF5 group object - optional

  • group_name (str) – HDF5 subgroup name - optional
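
A minimal round-trip sketch, assuming job is an existing GenericJob and using the default hdf and group_name arguments:

    job.to_hdf()    # write the job into its project HDF5 file
    job.from_hdf()  # restore the job content from the same file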

transfer_from_remote(delete_remote=True)[source]
update_master()[source]

After a job is finished it checks whether it is linked to any metajob - meaning the master ID is pointing to this job's job ID. If this is the case and the master job is in status suspended, the child wakes up the master job, sets its status to refresh and executes run on the master job. During the execution the master job is set to status refresh. If another child calls update_master while the master is in refresh, the status of the master is set to busy, and if the master is in status busy at the end of the update_master process another update is triggered.

validate_ready_to_run()[source]

Validate that the calculation is ready to be executed. By default no generic checks are performed, but one could check that the input information is complete or validate the consistency of the input at this point.

property version

Get the version of the hamiltonian, which is also the version of the executable unless a custom executable is used.

Returns

version number

Return type

str

property working_directory

Get the working directory the job is executed in - outside the HDF5 file. The working directory equals the path, but it is represented on the filesystem:

/absolute/path/to/the/file.h5/path/inside/the/hdf5/file

becomes:

/absolute/path/to/the/file_hdf5/path/inside/the/hdf5/file

Returns

absolute path to the working directory

Return type

str

write_input()[source]

Write the input files for the external executable. This method has to be implemented in the individual hamiltonians.

pyiron.base.job.generic.multiprocess_wrapper(job_id, working_dir, debug=False)[source]