pyiron.base.job.core module

class pyiron.base.job.core.DatabaseProperties(job_dict=None)[source]

Bases: object

Access the database entry of the job

class pyiron.base.job.core.HDF5Content(project_hdf5)[source]

Bases: object

Access the HDF5 file of the job

class pyiron.base.job.core.JobCore(project, job_name)[source]

Bases: pyiron.base.generic.template.PyironObject

The JobCore is the most fundamental pyiron job class; both the GenericJob and the reduced JobPath class are derived from it. JobPath only provides access to the HDF5 file, but in exchange is about an order of magnitude faster.

Parameters:
  • project (ProjectHDFio) – ProjectHDFio instance which points to the HDF5 file the job is stored in
  • job_name (str) – name of the job, which has to be unique within the project
.. attribute:: job_name

name of the job, which has to be unique within the project

.. attribute:: status

execution status of the job, can be one of the following: [initialized, appended, created, submitted, running, aborted, collect, suspended, refresh, busy, finished]

.. attribute:: job_id

unique id to identify the job in the pyiron database

.. attribute:: parent_id

job id of the predecessor job - the job which was executed before the current one in the current job series

.. attribute:: master_id

job id of the master job - a meta job which groups a series of jobs that are executed either in parallel or serially.

.. attribute:: child_ids

list of child job ids - only meta jobs have child jobs - jobs which list the meta job as their master

.. attribute:: project

Project instance the job is located in

.. attribute:: project_hdf5

ProjectHDFio instance which points to the HDF5 file the job is stored in

.. attribute:: job_info_str

short string describing the job by its job_name and job ID - mainly used for logging

.. attribute:: working_directory

working directory the job is executed in - outside the HDF5 file

.. attribute:: path

path to the job as a combination of absolute file system path and path within the HDF5 file.
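
The attributes above can be read directly from a loaded job. A minimal sketch, assuming an existing project "demo" containing a finished job "my_job" (both names are hypothetical):

>>> from pyiron import Project
>>> pr = Project("demo")
>>> job = pr.load("my_job")
>>> job.job_name
'my_job'
>>> str(job.status)                # JobStatus object; its string value is e.g. 'finished'
'finished'
>>> print(job.working_directory)   # file system path, outside the HDF5 file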

check_if_job_exists(job_name=None, project=None)[source]

Check if a job already exists in a specific project.

Parameters:
  • job_name (str) – Job name (optional)
  • project (ProjectHDFio, Project) – Project path (optional)
Returns:

True / False

Return type:

(bool)
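
A minimal usage sketch (the job name is hypothetical; when both arguments are omitted, the check is assumed to fall back to the current job and project):

>>> job.check_if_job_exists("my_job")
True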

child_ids

list of child job ids - only meta jobs have child jobs - jobs which list the meta job as their master

Returns:list of child job ids
Return type:list
compress(files_to_compress=None)[source]

Compress the output files of a job object.

Parameters:files_to_compress (list) – list of files to compress (optional); if omitted, all files in the working directory are compressed
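
A minimal round-trip sketch, assuming job is a finished job object:

>>> job.compress()        # with no argument, all files in the working directory are compressed
>>> job.is_compressed()
True
>>> job.decompress()      # restore the original files
>>> job.is_compressed()
False
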
copy()[source]

Copy the JobCore object which links to the HDF5 file

Returns:New JobCore object pointing to the same HDF5 file
Return type:JobCore
copy_to(project, new_database_entry=True, copy_files=True)[source]

Copy the content of the job including the HDF5 file to a new location

Parameters:
  • project (ProjectHDFio) – project to copy the job to
  • new_database_entry (bool) – [True/False] to create a new database entry - default True
  • copy_files (bool) – [True/False] copy the files inside the working directory - default True
Returns:

JobCore object pointing to the new location.

Return type:

JobCore
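
A hedged sketch: the parameter is documented as ProjectHDFio, but pyiron scripts commonly pass a Project instance, which is assumed to work here; the sub-project name is hypothetical:

>>> pr_copy = pr.create_group("copies")
>>> job_copy = job.copy_to(pr_copy, new_database_entry=True, copy_files=True)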

database_entry

Access the database entry of the job (see DatabaseProperties).

decompress()[source]

Decompress the output files of a compressed job object.

from_hdf(hdf=None, group_name='group')[source]

Restore object from hdf5 format - The function has to be implemented by the derived classes - usually the GenericJob class

Parameters:
  • hdf (ProjectHDFio) – Optional hdf5 file, otherwise self is used.
  • group_name (str) – Optional hdf5 group in the hdf5 file.
get(name)[source]

Internal wrapper function for __getitem__() - self[name]

Parameters:name (str, slice) – path to the data or key of the data object
Returns:data or data object
Return type:dict, list, float, int
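
For example (the HDF5 path below follows the typical pyiron output layout and is an assumption):

>>> job.get("output/generic/energy_tot")   # equivalent to job["output/generic/energy_tot"]
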
get_from_table(path, name)[source]

Get a specific value from a pandas.DataFrame

Parameters:
  • path (str) – relative path to the data object
  • name (str) – parameter key
Returns:

the value associated to the specific parameter key

Return type:

dict, list, float, int

get_job_id(job_specifier=None)[source]

Get the job_id for the job named job_name in the local project path from the database.

Parameters:job_specifier (str, int) – name of the job or job ID
Returns:job ID of the job
Return type:int
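
A minimal sketch (the job name and the returned ID are illustrative):

>>> job.get_job_id("my_job")
1
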
get_pandas(name)[source]

Load a dictionary from the HDF5 file and display the dictionary as a pandas DataFrame

Parameters:name (str) – HDF5 node name
Returns:The dictionary is returned as a pandas.DataFrame object
Return type:pandas.DataFrame
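
For example, assuming the job stores a dictionary-like node named "input" (an assumption):

>>> df = job.get_pandas("input")   # pandas.DataFrame view of the stored dictionary
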
id

Unique id to identify the job in the pyiron database - use self.job_id instead

Returns:job id
Return type:int
inspect(job_specifier)[source]

Inspect an existing pyiron object - most commonly a job - from the database

Parameters:job_specifier (str, int) – name of the job or job ID
Returns:Access to the HDF5 object only - not a GenericJob object; use load() to get the full object.
Return type:JobCore
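
A minimal sketch (the job name is hypothetical):

>>> job_view = job.inspect("my_job")   # JobCore: fast, HDF5-only access
>>> job_view["output"]                 # item access into the HDF5 file still works
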
is_compressed()[source]

Check if the job is already compressed or not.

Returns:[True/False]
Return type:bool
is_master_id(job_id)[source]

Check if the job ID job_id is the master ID for any child job

Parameters:job_id (int) – job ID of the master job
Returns:[True/False]
Return type:bool
is_self_archived()[source]

Check whether the job's content has been packed into an archive via self_archive().

job_id

Unique id to identify the job in the pyiron database

Returns:job id
Return type:int
job_info_str

Short string describing the job by its job_name and job ID - mainly used for logging

Returns:job info string
Return type:str
job_name

Get name of the job, which has to be unique within the project

Returns:job name
Return type:str
list_all()[source]

List all groups and nodes of the HDF5 file - where groups are equivalent to directories and nodes to files.

Returns:{‘groups’: [list of groups], ‘nodes’: [list of nodes]}
Return type:dict
list_childs()[source]

List child jobs as JobPath objects - not loading the full GenericJob objects for each child

Returns:list of child jobs
Return type:list
list_files()[source]

List files inside the working directory

Parameters:extension (str) – filter by a specific extension
Returns:list of file names
Return type:list
list_groups()[source]

equivalent to os.listdir (consider groups as equivalent to dirs)

Returns:list of groups in pytables for the path self.h5_path
Return type:(list)
list_nodes()[source]

List all nodes of the HDF5 file

Returns:list of nodes
Return type:list
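
A combined sketch of the listing helpers; the group, node, and file names in the comments are illustrative only:

>>> job.list_all()      # e.g. {'groups': ['input', 'output'], 'nodes': ['TYPE', 'VERSION']}
>>> job.list_groups()   # only the 'groups' part
>>> job.list_nodes()    # only the 'nodes' part
>>> job.list_files()    # files in the working directory, outside the HDF5 file
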
load(job_specifier, convert_to_object=True)[source]

Load an existing pyiron object - most commonly a job - from the database

Parameters:
  • job_specifier (str, int) – name of the job or job ID
  • convert_to_object (bool) – convert the object to a pyiron object or only access the HDF5 file - default=True. Accessing only the HDF5 file is about an order of magnitude faster, but only provides limited functionality. Compare the GenericJob object to the JobCore object.
Returns:

Either the full GenericJob object or just a reduced JobCore object

Return type:

GenericJob, JobCore
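
For example (the job name is hypothetical):

>>> full_job = job.load("my_job")                            # GenericJob, full functionality
>>> fast_job = job.load("my_job", convert_to_object=False)   # reduced object, HDF5 access only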

load_object(convert_to_object=True, project=None)[source]

Load object to convert a JobPath to a GenericJob object.

Parameters:
  • convert_to_object (bool) – convert the object to a pyiron object or only access the HDF5 file - default=True. Accessing only the HDF5 file is about an order of magnitude faster, but only provides limited functionality. Compare the GenericJob object to the JobCore object.
  • project (ProjectHDFio) – ProjectHDFio to load the object with - optional
Returns:

the loaded object - a GenericJob if convert_to_object=True, otherwise a reduced JobPath object

Return type:

GenericJob, JobPath

master_id

Get job id of the master job - a meta job which groups a series of jobs that are executed either in parallel or serially.

Returns:master id
Return type:int
move_to(project)[source]

Move the content of the job including the HDF5 file to a new location

Parameters:project (ProjectHDFio) – project to move the job to
Returns:JobCore object pointing to the new location.
Return type:JobCore
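
A hedged one-line sketch; as with copy_to(), the parameter is documented as ProjectHDFio, while passing a Project instance is assumed to work, and the sub-project name is hypothetical:

>>> job.move_to(pr.create_group("archive"))
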
name

Get name of the job, which has to be unique within the project

Returns:job name
Return type:str
parent_id

Get job id of the predecessor job - the job which was executed before the current one in the current job series

Returns:parent id
Return type:int
path

Absolute path of the HDF5 group starting from the system root - combination of the absolute system path plus the absolute path inside the HDF5 file starting from the root group.

Returns:absolute path
Return type:str
project

Project instance the job is located in

Returns:project the job is located in
Return type:Project
project_hdf5

Get the ProjectHDFio instance which points to the HDF5 file the job is stored in

Returns:HDF5 project
Return type:ProjectHDFio
remove(_protect_childs=True)[source]

Remove the job - this removes the HDF5 file, all data stored in the HDF5 file, and the corresponding database entry.

Parameters:_protect_childs (bool) – [True/False] by default child jobs cannot be deleted, to maintain consistency - default=True
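
A minimal sketch - note that this deletes data irreversibly:

>>> job.remove()   # removes the HDF5 file, its contents, and the database entry
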
remove_child()[source]

Internal function that removes the job together with its child jobs. Never use this command, since it will destroy the integrity of your project.

rename(new_job_name)[source]

Rename the job by changing its job name

Parameters:new_job_name (str) – new job name
reset_job_id(job_id)[source]

The reset_job_id function has to be implemented by the derived classes - usually the GenericJob class

Parameters:job_id (int) –
save()[source]

The save function has to be implemented by the derived classes - usually the GenericJob class

self_archive()[source]

Pack the job's content into an archive (see is_self_archived()).

self_unarchive()[source]

Restore the job's content from a previously created self-archive.

show_hdf()[source]

Iterate over the HDF5 data structure and generate a human-readable graph.

status

Execution status of the job, can be one of the following: [initialized, appended, created, submitted, running, aborted, collect, suspended, refresh, busy, finished]

Returns:status
Return type:(str/pyiron.base.job.jobstatus.JobStatus)
to_hdf(hdf=None, group_name='group')[source]

Store object in hdf5 format - The function has to be implemented by the derived classes - usually the GenericJob class

Parameters:
  • hdf (ProjectHDFio) – Optional hdf5 file, otherwise self is used.
  • group_name (str) – Optional hdf5 group in the hdf5 file.
to_object(object_type=None, **qwargs)[source]

Load the full pyiron object from an HDF5 file

Parameters:
  • object_type – if the ‘TYPE’ node is not available in the HDF5 file a manual object type can be set - optional
  • **qwargs – optional parameters [‘job_name’, ‘project’] - to specify the location of the HDF5 path
Returns:

pyiron object

Return type:

GenericJob
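
For example, converting the reduced object from the load() sketch above back into a full job:

>>> generic_job = fast_job.to_object()   # reads the 'TYPE' node from HDF5 to pick the class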

working_directory

working directory the job is executed in - outside the HDF5 file

Returns:working directory
Return type:str