Generating Benchmark Files and Serpent Scripts¶
Validating trained models against criticality benchmarks is an essential step. NucML provides a couple of utilities to help automate this tedious process.
[3]:
# Prototype
import sys
# This allows us to import the nucml utilities
sys.path.append("..")
[4]:
import pandas as pd
import os
import logging
logger = logging.getLogger()
logger.setLevel(logging.CRITICAL)
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_rows', 50)
pd.options.mode.chained_assignment = None # default='warn'
import nucml.datasets as nuc_data
import nucml.ace.data_utilities as ace_utils
import nucml.model.utilities as model_utils
[5]:
figure_dir = "Figures/"
Loading Datasets¶
In our work, several models were trained with Datasets 0-4. Since the models will be queried at the original ACE energy grid, we need to load the original data to expand the energy grid for the isotopes of interest (among other processing steps).
[6]:
# LOADING DATASET
df_b0, _, _, _, _, to_scale_b0, _ = nuc_data.load_exfor(pedro=True, basic=0, normalize=False)
df_b1, _, _, _, _, to_scale_b1, _ = nuc_data.load_exfor(pedro=True, basic=1, normalize=False)
df_b2, _, _, _, _, to_scale_b2, _ = nuc_data.load_exfor(pedro=True, basic=2, normalize=False)
df_b3, _, _, _, _, to_scale_b3, _ = nuc_data.load_exfor(pedro=True, basic=3, normalize=False)
df_b4, _, _, _, _, to_scale_b4, _ = nuc_data.load_exfor(pedro=True, basic=4, normalize=False)
Loading Decision Tree Results¶
NucML will create a directory per model, with subdirectories for every criticality benchmark case. Therefore, we need to specify where these model directories and subdirectories will be stored.
[9]:
dt_ml_ace_dir_b0 = "ml/DT_B0/"
dt_ml_ace_dir_b1 = "ml/DT_B1/"
dt_ml_ace_dir_b2 = "ml/DT_B2/"
dt_ml_ace_dir_b3 = "ml/DT_B3/"
dt_ml_ace_dir_b4 = "ml/DT_B4/"
Having defined the directories, we can read in the training results. In this example, I read the samples provided with the repository.
[6]:
# read in the training results
results_b0 = pd.read_csv("../ML_EXFOR_neutrons/2_DT/dt_resultsB0.csv").sort_values(by="max_depth")
results_b1 = pd.read_csv("../ML_EXFOR_neutrons/2_DT/dt_resultsB1.csv").sort_values(by="max_depth")
results_b2 = pd.read_csv("../ML_EXFOR_neutrons/2_DT/dt_resultsB2.csv").sort_values(by="max_depth")
results_b3 = pd.read_csv("../ML_EXFOR_neutrons/2_DT/dt_resultsB3.csv").sort_values(by="max_depth")
results_b4 = pd.read_csv("../ML_EXFOR_neutrons/2_DT/dt_resultsB4.csv").sort_values(by="max_depth")
results_b0 = results_b0[results_b0.normalizer == "none"]
Let us take a look at the columns included in the results:
[7]:
results_b0.columns
[7]:
Index(['id', 'max_depth', 'mss', 'msl', 'mt_strategy', 'normalizer',
'train_mae', 'train_mse', 'train_evs', 'train_mae_m', 'train_r2',
'val_mae', 'val_mse', 'val_evs', 'val_mae_m', 'val_r2', 'test_mae',
'test_mse', 'test_evs', 'test_mae_m', 'test_r2', 'model_path',
'training_time', 'scaler_path'],
dtype='object')
Notice that many performance metrics are available. You can include more information in your own results files. The only requirement is that the resulting DataFrame contains the following columns:
model_path
scaler_path
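For example, a minimal hand-rolled results file could be built as follows (the paths below are hypothetical placeholders, not files shipped with the repository):
[ ]:
# Hypothetical minimal results DataFrame: only model_path and scaler_path
# are required; any additional metric columns are optional.
my_results = pd.DataFrame({
    "model_path": ["/path/to/models/MyModel_v1/MyModel_v1.joblib"],
    "scaler_path": ["/path/to/models/MyModel_v1/MyModel_v1_scaler.joblib"],
})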
[10]:
results_b0[['model_path', 'scaler_path']].head()
[10]:
|    | model_path | scaler_path |
|----|------------|-------------|
| 52 | E:\ML_Models_EXFOR\DT_B0\DT60_MSS15_MSL10_none... | E:\ML_Models_EXFOR\DT_B0\DT60_MSS15_MSL10_none... |
| 47 | E:\ML_Models_EXFOR\DT_B0\DT60_MSS15_MSL3_none_... | E:\ML_Models_EXFOR\DT_B0\DT60_MSS15_MSL3_none_... |
| 45 | E:\ML_Models_EXFOR\DT_B0\DT60_MSS15_MSL1_none_... | E:\ML_Models_EXFOR\DT_B0\DT60_MSS15_MSL1_none_... |
| 43 | E:\ML_Models_EXFOR\DT_B0\DT60_MSS10_MSL7_none_... | E:\ML_Models_EXFOR\DT_B0\DT60_MSS10_MSL7_none_... |
| 41 | E:\ML_Models_EXFOR\DT_B0\DT60_MSS10_MSL5_none_... | E:\ML_Models_EXFOR\DT_B0\DT60_MSS10_MSL5_none_... |
Let us extract a single path:
[18]:
example_filename = results_b0.model_path.values[0]
example_filename
[18]:
'E:\\ML_Models_EXFOR\\DT_B0\\DT60_MSS15_MSL10_none_one_hot_B0_v1\\DT60_MSS15_MSL10_none_one_hot_B0_v1.joblib'
NucML uses the following convention to extract the model’s name:
[21]:
example_basename = os.path.basename(example_filename)
print("First, extract the model filename: ", example_basename)
First, extract the model filename: DT60_MSS15_MSL10_none_one_hot_B0_v1.joblib
[22]:
print("Then we split the filename to remove the file extension: ", example_basename.split(".")[0])
Then we split the filename to remove the file extension: DT60_MSS15_MSL10_none_one_hot_B0_v1
It is the DT60_MSS15_MSL10_none_one_hot_B0_v1 name that will be used to create a directory.
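Putting the two steps together, the convention can be reproduced with a small helper (a sketch of the equivalent logic, not NucML's internal code):
[ ]:
def model_name_from_path(model_path):
    """Extract the directory name NucML derives from a model path."""
    return os.path.basename(model_path).split(".")[0]

model_name_from_path(example_filename)  # 'DT60_MSS15_MSL10_none_one_hot_B0_v1'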
Generating Benchmark Files¶
The next step is to select the benchmark of interest. For information on the available benchmarks and instructions on how to include your own, please read the ML_Nuclear_Data/Benchmarks/inputs/README.md file. The included benchmarks are formatted in a specific way so NucML can read them.
In this case, we select U233_MET_FAST_001 (the U-233 Jezebel criticality benchmark). When configuring NucML, the path to the benchmark folder is saved automatically, so only the name needs to be specified.
Only isotopes that make up more than 10% of a benchmark component are replaced with ML-generated cross sections. Several assumptions are made in the backend concerning how unitarity is enforced; more information can be found in my Thesis. The approach is by no means the best nor the worst: it was created as proof-of-concept work. The generate_bench_ml_xs function should be lab-specific; in other words, you should create your own processing step if possible.
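As a rough starting point for such a replacement, the core of a custom step might look like the sketch below (heavily simplified; in practice the feature matrix carries isotope and reaction-channel features, unitarity must be enforced, and the predictions must be written back into ACE format):
[ ]:
import joblib  # the models and scalers referenced above are stored as .joblib files

def my_bench_ml_xs(results_row, feature_matrix):
    """Sketch of a lab-specific processing step (not NucML's implementation)."""
    model = joblib.load(results_row["model_path"])    # trained regressor
    scaler = joblib.load(results_row["scaler_path"])  # matching feature scaler
    return model.predict(scaler.transform(feature_matrix))  # ML-generated XS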
[23]:
BENCHMARK_NAME = "U233_MET_FAST_001"
[ ]:
ace_utils.generate_bench_ml_xs(df_b0, results_b0, BENCHMARK_NAME, to_scale_b0, dt_ml_ace_dir_b0, reset=True)
ace_utils.generate_bench_ml_xs(df_b1, results_b1, BENCHMARK_NAME, to_scale_b1, dt_ml_ace_dir_b1, reset=True)
ace_utils.generate_bench_ml_xs(df_b2, results_b2, BENCHMARK_NAME, to_scale_b2, dt_ml_ace_dir_b2, reset=True)
ace_utils.generate_bench_ml_xs(df_b3, results_b3, BENCHMARK_NAME, to_scale_b3, dt_ml_ace_dir_b3, reset=True)
ace_utils.generate_bench_ml_xs(df_b4, results_b4, BENCHMARK_NAME, to_scale_b4, dt_ml_ace_dir_b4, reset=True)
Let us see which directories were created:
[29]:
os.listdir("ml/")[:5]
[29]:
['DT_B0', 'DT_B1', 'DT_B2', 'DT_B3', 'DT_B4']
Indeed, all five Decision Tree directories were created successfully (other model directories from previous work are not shown). Let us peek at the contents of the first directory and its subdirectories:
[31]:
os.listdir("ml/DT_B0/")[:5]
[31]:
['DT100_MSS10_MSL1_none_one_hot_B0_v1',
'DT100_MSS10_MSL3_none_one_hot_B0_v1',
'DT100_MSS10_MSL5_none_one_hot_B0_v1',
'DT100_MSS10_MSL7_none_one_hot_B0_v1',
'DT100_MSS15_MSL1_none_one_hot_B0_v1']
[32]:
os.listdir("ml/DT_B0/DT100_MSS10_MSL1_none_one_hot_B0_v1/")
[32]:
['U233_MET_FAST_001', 'U233_MET_FAST_002_001', 'U233_MET_FAST_002_002']
[33]:
os.listdir("ml/DT_B0/DT100_MSS10_MSL1_none_one_hot_B0_v1/U233_MET_FAST_001/")
[33]:
['acelib',
'converter.m',
'input',
'input.out',
'input.seed',
'input_res.m',
'ml_xs_csv',
'results.mat',
'sss_endfb7u.xsdata']
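Roughly speaking, each case directory holds everything needed to run the benchmark: the Serpent input (input), the ML-generated cross-section data (acelib, ml_xs_csv, sss_endfb7u.xsdata), and the MATLAB conversion script (converter.m) together with the run outputs (input.out, input_res.m, results.mat).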
Generating SERPENT Bash Script¶
This is a completely experimental feature. You pass in the directory you want NucML to scan, and it generates a single bash script that runs all cases.
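For reference, the generated script is essentially a sequence of cd, sss2, and MATLAB calls per case. Here is a minimal sketch of assembling the cd/sss2 portion yourself (the thread count mirrors the generated output shown below; the real function also appends the MATLAB conversion step, and the output filename here is changed to avoid clobbering the generated script):
[ ]:
# Sketch: assemble a bash script that runs Serpent for every case directory.
script_lines = []
for model_dir in sorted(os.listdir(dt_ml_ace_dir_b0)):
    case_dir = os.path.join(dt_ml_ace_dir_b0, model_dir, BENCHMARK_NAME)
    if os.path.isdir(case_dir):
        script_lines.append("cd {}\n".format(os.path.abspath(case_dir)))
        script_lines.append("sss2 -omp 10 input\n")

with open(os.path.join(dt_ml_ace_dir_b0, BENCHMARK_NAME + "_sketch.sh"), "w") as f:
    f.writelines(script_lines)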
[53]:
ace_utils.generate_serpent_bash(dt_ml_ace_dir_b0, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
ace_utils.generate_serpent_bash(dt_ml_ace_dir_b1, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
ace_utils.generate_serpent_bash(dt_ml_ace_dir_b2, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
ace_utils.generate_serpent_bash(dt_ml_ace_dir_b3, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
ace_utils.generate_serpent_bash(dt_ml_ace_dir_b4, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
[35]:
with open("ml/DT_B0/U233_MET_FAST_001.sh") as myfile:
head = [next(myfile) for x in range(10)]
print(head)
['cd /mnt/c/Users/Pedro/Desktop/ML_Nuclear_Data/Benchmarks/ml/DT_B0/DT100_MSS10_MSL1_none_one_hot_B0_v1/U233_MET_FAST_001/\n', 'sss2 -omp 10 input\n', '/mnt/c/Program\\ Files/MATLAB/R2019a/bin/matlab.exe -nodisplay -nosplash -nodesktop -r "run(\'converter.m\');exit;" \n', 'cd /mnt/c/Users/Pedro/Desktop/ML_Nuclear_Data/Benchmarks/ml/DT_B0/DT100_MSS10_MSL3_none_one_hot_B0_v1/U233_MET_FAST_001/\n', 'sss2 -omp 10 input\n', '/mnt/c/Program\\ Files/MATLAB/R2019a/bin/matlab.exe -nodisplay -nosplash -nodesktop -r "run(\'converter.m\');exit;" \n', 'cd /mnt/c/Users/Pedro/Desktop/ML_Nuclear_Data/Benchmarks/ml/DT_B0/DT100_MSS10_MSL5_none_one_hot_B0_v1/U233_MET_FAST_001/\n', 'sss2 -omp 10 input\n', '/mnt/c/Program\\ Files/MATLAB/R2019a/bin/matlab.exe -nodisplay -nosplash -nodesktop -r "run(\'converter.m\');exit;" \n', 'cd /mnt/c/Users/Pedro/Desktop/ML_Nuclear_Data/Benchmarks/ml/DT_B0/DT100_MSS10_MSL7_none_one_hot_B0_v1/U233_MET_FAST_001/\n']
[37]:
with open("ml/DT_B0/U233_MET_FAST_001.sh", "r") as file: # the a opens it in append mode
for i in range(10):
line = next(file)
print(line)
cd /mnt/c/Users/Pedro/Desktop/ML_Nuclear_Data/Benchmarks/ml/DT_B0/DT100_MSS10_MSL1_none_one_hot_B0_v1/U233_MET_FAST_001/
sss2 -omp 10 input
/mnt/c/Program\ Files/MATLAB/R2019a/bin/matlab.exe -nodisplay -nosplash -nodesktop -r "run('converter.m');exit;"
cd /mnt/c/Users/Pedro/Desktop/ML_Nuclear_Data/Benchmarks/ml/DT_B0/DT100_MSS10_MSL3_none_one_hot_B0_v1/U233_MET_FAST_001/
sss2 -omp 10 input
/mnt/c/Program\ Files/MATLAB/R2019a/bin/matlab.exe -nodisplay -nosplash -nodesktop -r "run('converter.m');exit;"
cd /mnt/c/Users/Pedro/Desktop/ML_Nuclear_Data/Benchmarks/ml/DT_B0/DT100_MSS10_MSL5_none_one_hot_B0_v1/U233_MET_FAST_001/
sss2 -omp 10 input
/mnt/c/Program\ Files/MATLAB/R2019a/bin/matlab.exe -nodisplay -nosplash -nodesktop -r "run('converter.m');exit;"
cd /mnt/c/Users/Pedro/Desktop/ML_Nuclear_Data/Benchmarks/ml/DT_B0/DT100_MSS10_MSL7_none_one_hot_B0_v1/U233_MET_FAST_001/
Notice that the full path to the MATLAB executable is hard-coded here. You will probably need to change it, either by writing a script or simply by using “Replace All” in any code editor. MATLAB is used to convert the Serpent output into a .mat file, which makes the detector information easier for analysis tools to read.
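If you prefer to do the substitution programmatically, a minimal sketch follows (old_matlab matches the generated script shown above; new_matlab is a hypothetical target path):
[ ]:
# Point the generated script at a different MATLAB executable.
script_path = "ml/DT_B0/U233_MET_FAST_001.sh"
old_matlab = "/mnt/c/Program\\ Files/MATLAB/R2019a/bin/matlab.exe"
new_matlab = "/usr/local/bin/matlab"  # hypothetical; use your own installation path

with open(script_path) as f:
    contents = f.read()
with open(script_path, "w") as f:
    f.write(contents.replace(old_matlab, new_matlab))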
The next step is simply to run the script, and you are done! See the next notebook for information on how to gather and analyze the results.
Other Benchmarks¶
The same workflow applies to the other benchmarks.
[48]:
BENCHMARK_NAME = "U233_MET_FAST_002_001"
[45]:
ace_utils.generate_bench_ml_xs(df_b0, results_b0, BENCHMARK_NAME, to_scale_b0, dt_ml_ace_dir_b0, reset=True)
ace_utils.generate_bench_ml_xs(df_b1, results_b1, BENCHMARK_NAME, to_scale_b1, dt_ml_ace_dir_b1, reset=True)
ace_utils.generate_bench_ml_xs(df_b2, results_b2, BENCHMARK_NAME, to_scale_b2, dt_ml_ace_dir_b2, reset=True)
ace_utils.generate_bench_ml_xs(df_b3, results_b3, BENCHMARK_NAME, to_scale_b3, dt_ml_ace_dir_b3, reset=True)
ace_utils.generate_bench_ml_xs(df_b4, results_b4, BENCHMARK_NAME, to_scale_b4, dt_ml_ace_dir_b4, reset=True)
[49]:
ace_utils.generate_serpent_bash(dt_ml_ace_dir_b0, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
ace_utils.generate_serpent_bash(dt_ml_ace_dir_b1, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
ace_utils.generate_serpent_bash(dt_ml_ace_dir_b2, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
ace_utils.generate_serpent_bash(dt_ml_ace_dir_b3, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
ace_utils.generate_serpent_bash(dt_ml_ace_dir_b4, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
[50]:
BENCHMARK_NAME = "U233_MET_FAST_002_002"
[46]:
ace_utils.generate_bench_ml_xs(df_b0, results_b0, BENCHMARK_NAME, to_scale_b0, dt_ml_ace_dir_b0, reset=True)
ace_utils.generate_bench_ml_xs(df_b1, results_b1, BENCHMARK_NAME, to_scale_b1, dt_ml_ace_dir_b1, reset=True)
ace_utils.generate_bench_ml_xs(df_b2, results_b2, BENCHMARK_NAME, to_scale_b2, dt_ml_ace_dir_b2, reset=True)
ace_utils.generate_bench_ml_xs(df_b3, results_b3, BENCHMARK_NAME, to_scale_b3, dt_ml_ace_dir_b3, reset=True)
ace_utils.generate_bench_ml_xs(df_b4, results_b4, BENCHMARK_NAME, to_scale_b4, dt_ml_ace_dir_b4, reset=True)
[51]:
ace_utils.generate_serpent_bash(dt_ml_ace_dir_b0, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
ace_utils.generate_serpent_bash(dt_ml_ace_dir_b1, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
ace_utils.generate_serpent_bash(dt_ml_ace_dir_b2, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
ace_utils.generate_serpent_bash(dt_ml_ace_dir_b3, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
ace_utils.generate_serpent_bash(dt_ml_ace_dir_b4, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
K-Nearest-Neighbors¶
[112]:
knn_ml_ace_dir_b0 = "ml/KNN_B0/"
knn_ml_ace_dir_b1 = "ml/KNN_B1/"
knn_ml_ace_dir_b2 = "ml/KNN_B2/"
knn_ml_ace_dir_b3 = "ml/KNN_B3/"
knn_ml_ace_dir_b4 = "ml/KNN_B4/"
[113]:
# results_b0 = pd.read_csv("../ML_EXFOR_neutrons/1_KNN/knn_results_B0.csv").sort_values(by="id")
results_b1 = pd.read_csv("../ML_EXFOR_neutrons/1_KNN/knn_results_B1.csv").sort_values(by="id")
results_b2 = pd.read_csv("../ML_EXFOR_neutrons/1_KNN/knn_results_B2.csv").sort_values(by="id")
results_b3 = pd.read_csv("../ML_EXFOR_neutrons/1_KNN/knn_results_B3.csv").sort_values(by="id")
results_b4 = pd.read_csv("../ML_EXFOR_neutrons/1_KNN/knn_results_B4.csv").sort_values(by="id")
# results_b0["scale_energy"] = results_b0.run_name.apply(lambda x: True if "v2" in x else False)
# results_b0["Model"] = results_b0.model_path.apply(lambda x: os.path.basename(os.path.dirname(x)))
# results_b0 = results_b0[results_b0.normalizer == "minmax"]
# results_b0 = results_b0[results_b0.scale_energy == True]
# results_b0 = results_b0[results_b0.distance_metric == 'manhattan']
[114]:
BENCHMARK_NAME = "U233_MET_FAST_001"
[69]:
# ace_utils.generate_bench_ml_xs(df_b0, results_b0, BENCHMARK_NAME, to_scale_b0, knn_ml_ace_dir_b0, reset=True)
ace_utils.generate_bench_ml_xs(df_b1, results_b1, BENCHMARK_NAME, to_scale_b1, knn_ml_ace_dir_b1, reset=True)
ace_utils.generate_bench_ml_xs(df_b2, results_b2, BENCHMARK_NAME, to_scale_b2, knn_ml_ace_dir_b2, reset=True)
ace_utils.generate_bench_ml_xs(df_b3, results_b3, BENCHMARK_NAME, to_scale_b3, knn_ml_ace_dir_b3, reset=True)
ace_utils.generate_bench_ml_xs(df_b4, results_b4, BENCHMARK_NAME, to_scale_b4, knn_ml_ace_dir_b4, reset=True)
[71]:
# ace_utils.generate_serpent_bash(knn_ml_ace_dir_b0, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
ace_utils.generate_serpent_bash(knn_ml_ace_dir_b1, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
ace_utils.generate_serpent_bash(knn_ml_ace_dir_b2, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
ace_utils.generate_serpent_bash(knn_ml_ace_dir_b3, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
ace_utils.generate_serpent_bash(knn_ml_ace_dir_b4, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
[115]:
BENCHMARK_NAME = "U233_MET_FAST_002_001"
[116]:
# ace_utils.generate_bench_ml_xs(df_b0, results_b0, BENCHMARK_NAME, to_scale_b0, knn_ml_ace_dir_b0, reset=True)
ace_utils.generate_bench_ml_xs(df_b1, results_b1[3:], BENCHMARK_NAME, to_scale_b1, knn_ml_ace_dir_b1, reset=True)
# ace_utils.generate_bench_ml_xs(df_b2, results_b2, BENCHMARK_NAME, to_scale_b2, knn_ml_ace_dir_b2, reset=True)
# ace_utils.generate_bench_ml_xs(df_b3, results_b3, BENCHMARK_NAME, to_scale_b3, knn_ml_ace_dir_b3, reset=True)
# ace_utils.generate_bench_ml_xs(df_b4, results_b4, BENCHMARK_NAME, to_scale_b4, knn_ml_ace_dir_b4, reset=True)
[117]:
# ace_utils.generate_serpent_bash(knn_ml_ace_dir_b0, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
ace_utils.generate_serpent_bash(knn_ml_ace_dir_b1, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
# ace_utils.generate_serpent_bash(knn_ml_ace_dir_b2, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
# ace_utils.generate_serpent_bash(knn_ml_ace_dir_b3, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
# ace_utils.generate_serpent_bash(knn_ml_ace_dir_b4, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
[118]:
BENCHMARK_NAME = "U233_MET_FAST_002_002"
[119]:
# ace_utils.generate_bench_ml_xs(df_b0, results_b0, BENCHMARK_NAME, to_scale_b0, knn_ml_ace_dir_b0, reset=True)
ace_utils.generate_bench_ml_xs(df_b1, results_b1, BENCHMARK_NAME, to_scale_b1, knn_ml_ace_dir_b1, reset=True)
# ace_utils.generate_bench_ml_xs(df_b2, results_b2, BENCHMARK_NAME, to_scale_b2, knn_ml_ace_dir_b2, reset=True)
# ace_utils.generate_bench_ml_xs(df_b3, results_b3, BENCHMARK_NAME, to_scale_b3, knn_ml_ace_dir_b3, reset=True)
# ace_utils.generate_bench_ml_xs(df_b4, results_b4, BENCHMARK_NAME, to_scale_b4, knn_ml_ace_dir_b4, reset=True)
[120]:
# ace_utils.generate_serpent_bash(knn_ml_ace_dir_b0, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
ace_utils.generate_serpent_bash(knn_ml_ace_dir_b1, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
# ace_utils.generate_serpent_bash(knn_ml_ace_dir_b2, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
# ace_utils.generate_serpent_bash(knn_ml_ace_dir_b3, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
# ace_utils.generate_serpent_bash(knn_ml_ace_dir_b4, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
XGBoost¶
[121]:
xgb_ml_ace_dir_b0 = "ml/XGB_B0/"
xgb_ml_ace_dir_b1 = "ml/XGB_B1/"
xgb_ml_ace_dir_b2 = "ml/XGB_B2/"
xgb_ml_ace_dir_b3 = "ml/XGB_B3/"
xgb_ml_ace_dir_b4 = "ml/XGB_B4/"
[122]:
results_b0 = pd.read_csv("../ML_EXFOR_neutrons/3_XGB/xgb_resultsB0.csv")
results_b1 = pd.read_csv("../ML_EXFOR_neutrons/3_XGB/xgb_resultsB1.csv")
results_b2 = pd.read_csv("../ML_EXFOR_neutrons/3_XGB/xgb_resultsB2.csv")
results_b3 = pd.read_csv("../ML_EXFOR_neutrons/3_XGB/xgb_resultsB3.csv")
results_b4 = pd.read_csv("../ML_EXFOR_neutrons/3_XGB/xgb_resultsB4.csv")
[123]:
BENCHMARK_NAME = "U233_MET_FAST_001"
[124]:
ace_utils.generate_bench_ml_xs(df_b0, results_b0, BENCHMARK_NAME, to_scale_b0, xgb_ml_ace_dir_b0, reset=True)
# ace_utils.generate_bench_ml_xs(df_b1, results_b1, BENCHMARK_NAME, to_scale_b1, xgb_ml_ace_dir_b1, reset=True)
ace_utils.generate_bench_ml_xs(df_b2, results_b2, BENCHMARK_NAME, to_scale_b2, xgb_ml_ace_dir_b2, reset=True)
ace_utils.generate_bench_ml_xs(df_b3, results_b3, BENCHMARK_NAME, to_scale_b3, xgb_ml_ace_dir_b3, reset=True)
ace_utils.generate_bench_ml_xs(df_b4, results_b4, BENCHMARK_NAME, to_scale_b4, xgb_ml_ace_dir_b4, reset=True)
[125]:
ace_utils.generate_serpent_bash(xgb_ml_ace_dir_b0, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
# ace_utils.generate_serpent_bash(xgb_ml_ace_dir_b1, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
ace_utils.generate_serpent_bash(xgb_ml_ace_dir_b2, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
ace_utils.generate_serpent_bash(xgb_ml_ace_dir_b3, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
ace_utils.generate_serpent_bash(xgb_ml_ace_dir_b4, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
[126]:
BENCHMARK_NAME = "U233_MET_FAST_002_001"
[127]:
# ace_utils.generate_bench_ml_xs(df_b0, results_b0, BENCHMARK_NAME, to_scale_b0, xgb_ml_ace_dir_b0, reset=True)
ace_utils.generate_bench_ml_xs(df_b1, results_b1, BENCHMARK_NAME, to_scale_b1, xgb_ml_ace_dir_b1, reset=True)
# ace_utils.generate_bench_ml_xs(df_b2, results_b2, BENCHMARK_NAME, to_scale_b2, xgb_ml_ace_dir_b2, reset=True)
# ace_utils.generate_bench_ml_xs(df_b3, results_b3, BENCHMARK_NAME, to_scale_b3, xgb_ml_ace_dir_b3, reset=True)
# ace_utils.generate_bench_ml_xs(df_b4, results_b4, BENCHMARK_NAME, to_scale_b4, xgb_ml_ace_dir_b4, reset=True)
[128]:
# ace_utils.generate_serpent_bash(xgb_ml_ace_dir_b0, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
ace_utils.generate_serpent_bash(xgb_ml_ace_dir_b1, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
# ace_utils.generate_serpent_bash(xgb_ml_ace_dir_b2, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
# ace_utils.generate_serpent_bash(xgb_ml_ace_dir_b3, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
# ace_utils.generate_serpent_bash(xgb_ml_ace_dir_b4, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
[129]:
BENCHMARK_NAME = "U233_MET_FAST_002_002"
[130]:
# ace_utils.generate_bench_ml_xs(df_b0, results_b0, BENCHMARK_NAME, to_scale_b0, xgb_ml_ace_dir_b0, reset=True)
ace_utils.generate_bench_ml_xs(df_b1, results_b1, BENCHMARK_NAME, to_scale_b1, xgb_ml_ace_dir_b1, reset=True)
# ace_utils.generate_bench_ml_xs(df_b2, results_b2, BENCHMARK_NAME, to_scale_b2, xgb_ml_ace_dir_b2, reset=True)
# ace_utils.generate_bench_ml_xs(df_b3, results_b3, BENCHMARK_NAME, to_scale_b3, xgb_ml_ace_dir_b3, reset=True)
# ace_utils.generate_bench_ml_xs(df_b4, results_b4, BENCHMARK_NAME, to_scale_b4, xgb_ml_ace_dir_b4, reset=True)
[131]:
# ace_utils.generate_serpent_bash(xgb_ml_ace_dir_b0, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
ace_utils.generate_serpent_bash(xgb_ml_ace_dir_b1, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
# ace_utils.generate_serpent_bash(xgb_ml_ace_dir_b2, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
# ace_utils.generate_serpent_bash(xgb_ml_ace_dir_b3, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)
# ace_utils.generate_serpent_bash(xgb_ml_ace_dir_b4, BENCHMARK_NAME, benchmark=BENCHMARK_NAME)