pyiron.table.datamining module

class pyiron.table.datamining.FunctionContainer[source]

Bases: object

class pyiron.table.datamining.JobFilters[source]

Bases: object

static job_name_contains(job_name_segment)[source]
static job_type(job_type)[source]
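The `JobFilters` names suggest filter factories: each static method returns a predicate that can be applied to a job when building a table. A minimal illustrative sketch of that pattern (stand-in `Job` class and factory bodies are assumptions, not pyiron's implementation):

```python
# Illustrative sketch of filter factories in the style of
# JobFilters.job_name_contains / JobFilters.job_type.
# The Job class and predicate bodies are assumptions for illustration only.

class Job:
    """Minimal stand-in for a pyiron job with a name and a type."""
    def __init__(self, job_name, job_type):
        self.job_name = job_name
        self.job_type = job_type

def job_name_contains(job_name_segment):
    """Return a predicate matching jobs whose name contains the segment."""
    return lambda job: job_name_segment in job.job_name

def job_type(type_name):
    """Return a predicate matching jobs of the given type."""
    return lambda job: job.job_type == type_name

jobs = [Job("murn_Al_0", "Murnaghan"), Job("md_Al_300K", "Lammps")]
murn_filter = job_name_contains("murn")
selected = [j.job_name for j in jobs if murn_filter(j)]
# selected == ["murn_Al_0"]
```

Returning closures keeps the filter reusable: the same factory output can be applied to any list of jobs.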
class pyiron.table.datamining.PyironTable(project, name=None)[source]

Bases: object

col_to_value(col_name)[source]
convert_dict(input_dict)[source]
create_table(enforce_update=False, level=3, file=None, job_status_list=None)[source]
filter
filter_function
from_hdf()[source]
get_dataframe()[source]
list_groups()[source]
list_nodes()[source]
load(name=None)[source]
name
refill_dict(diff_dict_lst)[source]
save(name=None)[source]
static str_to_value(input_val)[source]
to_hdf()[source]
static total_lst_of_keys(diff_dict_lst)[source]
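The core `PyironTable` workflow is to register named column functions and then materialize one row per job. A conceptual sketch of that pattern, using plain dicts instead of pyiron objects (the `build_table` helper is hypothetical; the real `get_dataframe()` returns a `pandas.DataFrame`):

```python
# Conceptual sketch of the PyironTable pattern (not the pyiron implementation):
# register named column functions, apply each one to every job, and collect
# the results into a list of row dicts.

def build_table(jobs, functions):
    """Apply each named function to every job; one row dict per job."""
    return [{name: func(job) for name, func in functions.items()} for job in jobs]

# Stand-in "jobs" as plain dicts; real pyiron jobs expose richer objects.
jobs = [
    {"job_name": "struct_1", "energy": -3.2},
    {"job_name": "struct_2", "energy": -3.5},
]
functions = {
    "name": lambda job: job["job_name"],
    "energy_eV": lambda job: job["energy"],
}
table = build_table(jobs, functions)
# table == [{"name": "struct_1", "energy_eV": -3.2},
#           {"name": "struct_2", "energy_eV": -3.5}]
```

Each registered function becomes one column; the list of row dicts maps directly onto a `pandas.DataFrame` constructor call.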
class pyiron.table.datamining.TableJob(project, job_name)[source]

Bases: pyiron.base.job.generic.GenericJob

add
analysis_project
convert_to_object
enforce_update
filter
filter_function
from_hdf(hdf=None, group_name=None)[source]

Restore the pyiron table job from HDF5.

Parameters:
  • hdf (ProjectHDFio) – HDF5 group object – optional
  • group_name (str) – HDF5 subgroup name – optional
get_dataframe()[source]
Returns: pandas.DataFrame – the pyiron table as a DataFrame
project_level
pyiron_table
ref_project
run_static()[source]

The run_static() function is called by run() to execute the simulation.

to_hdf(hdf=None, group_name=None)[source]

Store the pyiron table job in HDF5.

Parameters:
  • hdf (ProjectHDFio) – HDF5 group object – optional
  • group_name (str) – HDF5 subgroup name – optional
update_table(job_status_list=None)[source]

Update the pyiron table object: add new columns if a new function was added, or add new rows for new jobs.

Parameters: job_status_list (list/None) – list of job statuses to include in the table; defaults to ["finished"]
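The incremental-update idea behind update_table() can be sketched as follows: only jobs not already in the table, and whose status is in the allowed list, get a new row appended. This sketch is illustrative only; names, row layout, and the diffing strategy are assumptions, not pyiron's implementation:

```python
# Sketch of an incremental table update in the style of update_table():
# skip jobs already in the table, and only add jobs whose status is allowed.
# Row layout and field names are assumptions for illustration.

def update_table(table, jobs, job_status_list=None):
    """Append rows for new jobs with an allowed status; keep existing rows."""
    if job_status_list is None:
        job_status_list = ["finished"]  # default status filter, as documented
    known_ids = {row["id"] for row in table}
    for job in jobs:
        if job["id"] not in known_ids and job["status"] in job_status_list:
            table.append({"id": job["id"], "name": job["name"]})
    return table

table = [{"id": 1, "name": "job_a"}]  # existing table with one row
jobs = [
    {"id": 1, "name": "job_a", "status": "finished"},  # already present
    {"id": 2, "name": "job_b", "status": "finished"},  # new, allowed
    {"id": 3, "name": "job_c", "status": "aborted"},   # new, filtered out
]
update_table(table, jobs)
# table now has rows for ids 1 and 2; the aborted job 3 is skipped
```

Keeping existing rows untouched is what makes repeated updates cheap: only the diff between the project and the table is evaluated.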
validate_ready_to_run()[source]

Validate that the calculation is ready to be executed. By default, no generic checks are performed, but one could verify that the input information is complete or validate the consistency of the input at this point.

write_input()[source]

Write the input files for the external executable. This method has to be implemented in the individual hamiltonians.