Next thing:
Make the zero-mag correction plot using the existing matched-data CSV file (sketch below).
Expand the query range.
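A minimal sketch of the zero-mag correction, assuming the matched CSV pairs an instrumental and a catalog magnitude per source; the file name and the columns `mag_instr` and `mag_ps1` are placeholders for whatever the matched-data file actually contains:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Placeholder file and column names; adjust to the actual matched-data CSV schema.
df = pd.read_csv("matched_data.csv")
offset = df["mag_ps1"] - df["mag_instr"]   # per-source zero-point offset
zp = offset.median()                       # robust zero-point estimate

fig, ax = plt.subplots()
ax.scatter(df["mag_ps1"], offset, s=4, alpha=0.3)
ax.axhline(zp, color="r", label=f"median ZP = {zp:.3f}")
ax.set_xlabel("PanSTARRS r mag")
ax.set_ylabel("catalog - instrumental (mag)")
ax.legend()
fig.savefig("zero_mag_correction.png", dpi=150)
```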
PanSTARRS column meanings: see "MAST PanSTARRS Search Output Columns".
Files:
detectibility_visulization: checks the LSST cadence; output lsst_pointing.html
Check_mysim_result: survey year determination - ids
basic: get orbit and color - df
Observatory codes:
Argus: U83
LSST: X05
scp:
scp -r /Users/qifengc/Documents/2_Research.nosync/sorcha_sim_argus/my_sim/neo_orbit.csv [email protected]:/hpc/group/cosmology/qc59/argus
Submitted job 36510511 for range 3000-3999
Submitted job 36510512 for range 4000-4999
Submitted job 36510513 for range 5000-5999
Submitted job 36510562 for range 6000-6999
11-04
Sent Maryann files for orbit fitting (r>150 objects, selected to cover a range of detection counts per object; selection sketched below).
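A sketch of how that selection could be done: bin objects by detections per object and take a few from each bin, so the orbit-fitting sample spans sparse to well-observed objects. The detections file name and the per-bin count are assumptions.

```python
import numpy as np
import pandas as pd

det = pd.read_hdf("impactor_run_r150.h5")   # hypothetical detections file
counts = det.groupby("ObjID").size().rename("n_det")

# Log-spaced bins over detection counts; up to 5 objects per bin.
bins = np.logspace(np.log10(counts.min()) - 0.01, np.log10(counts.max()), 8)
sample = (
    counts.to_frame()
          .assign(bin=pd.cut(counts, bins, include_lowest=True))
          .groupby("bin", observed=True)
          .head(5)
)
sample.to_csv("orbit_fit_sample.csv")
```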

10-29
LSST pointing with the Argus cadence always fails with this error:
The likely cause: the LSST setup uses more filters (colors) than my current make_argus_pointing provides. One fix is to revise the color input for the LSST ini.
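To confirm the mismatch, one can check which color columns the physical-parameters CSV actually carries against the filters the config asks for. A sketch; the config key `observing_filters` is from memory of the Sorcha config format, so verify against the actual ini:

```python
import pandas as pd

colors = pd.read_csv("cleaned_synthetic_impactors_0_134999_color.csv")
print(colors.columns.tolist())  # which color terms are actually present?

# Every filter listed under the config's observing_filters (key name from
# memory) needs a matching color column here; the Argus pointing only has r.
```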

Revised the ini file to use only the r band, and resubmitted the job to check:
(base) qc59@dcc-login-02 **/work/qc59 $** vim sorcha_config_sband_lsst_st3.ini
(base) qc59@dcc-login-02 **/work/qc59 $** sbatch --array=0-1 multi_sorcha_lsst_custom_p_st4.sh 96 16
Submitted batch job 38917306
Found an error in the previous LSST st3 run (the stage that turns on the fading function):
the fading function was apparently never enabled in the ini file.
New st3:
(base) qc59@dcc-login-02 /work/qc59 $ sbatch sorcha_run.sh -c sorcha_config_st3.ini -p ./cleaned_synthetic_impactors_0_134999_color.csv --orbits ./cleaned_synthetic_impactors_0_134999_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_0_134999_st3_2 --ew impactor_run_0_134999_st3_complete_2
Submitted batch job
Run failed; resubmitting:
(base) qc59@dcc-login-03 **/work/qc59 $** sbatch sorcha_run.sh -c sorcha_config_st3.ini -p ./cleaned_synthetic_impactors_0_134999_color.csv --orbits ./cleaned_synthetic_impactors_0_134999_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_0_134999_st3_2 --ew impactor_run_0_134999_st3_complete_2
Submitted batch job 39214146
10-28
Running r150 in parallel to double-check:
(base) qc59@dcc-login-04 **/work/qc59 $** sbatch --array=0-1 multi_sorcha_argus_r150_st4.sh 96 16
Submitted batch job 38812941
(base) qc59@dcc-login-04 **/work/qc59 $** squeue -u qc59
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
38070609 cosmology run qc59 R 14-04:06:56 1 dcc-cosmology-13
38812941_0 scavenger sorcha qc59 R 0:06 1 dcc-fergusonlab-01
38812941_1 scavenger sorcha qc59 R 0:06 1 dcc-fergusonlab-03
Is "impactor_run_0_134999_10yr_full_output_test.h5" on my local machine not the full dataset? Quick check sketched below.
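A quick completeness check, assuming the file was written with pandas (key names vary between runs):

```python
import pandas as pd

with pd.HDFStore("impactor_run_0_134999_10yr_full_output_test.h5", mode="r") as store:
    for key in store.keys():
        storer = store.get_storer(key)
        # nrows exists for table-format data; fixed format exposes shape instead
        n = getattr(storer, "nrows", None) or getattr(storer, "shape", "unknown")
        print(key, n)
```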
Making the LSST-depth pointing with the Argus cadence:
(base) qc59@dcc-login-01 /work/qc59 $ sbatch make_argus_pointing.sh
Submitted batch job 38818064
Using the LSST-depth pointing with the Argus cadence for a Sorcha run:
(base) qc59@dcc-login-04 **/work/qc59 $** sbatch --array=0-11 multi_sorcha_lsst_custom_p_st4.sh 96 16
Submitted batch job 38828973
10-26
The plotting script is ready to run on the full dataset:
(base) qc59@dcc-login-03 **/work/qc59 $** sbatch run_stage_analysis_full.sh
Submitted batch job 38754444
And on the non-full subset:
(base) qc59@dcc-login-03 **/work/qc59 $** sbatch run_stage_analysis.sh
Submitted batch job 38754435
The multi-file data-reading code is working:
(base) qc59@dcc-login-04 **/work/qc59 $** cat logs/st_analysis-38696946.log
[Fri Oct 24 03:39:01 AM EDT 2025] Host: dcc-tunglab-01
analysis for each sorcha stage
[1,531 objs] warning_time_days valid: 1,531
[All input objects] sizes: 17,517
[Stage 1: vignetting, and mag limit] unique detected objects: 1,531
[Stage 2: +randomization] unique detected objects: 11
[Stage 3: +fading function] unique detected objects: 11
[Stage 4: +linking] unique detected objects: 8
Double-checked in a Jupyter notebook and the numbers match.
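The chunked counting behind the log below is essentially this; the file name, HDF key, and chunk size are assumptions, and chunked reads require the HDF5 to be in table format:

```python
import pandas as pd

unique_ids = set()
for i, chunk in enumerate(
        pd.read_hdf("stage_output.h5", key="df", chunksize=500_000)):
    unique_ids.update(chunk["ObjID"].unique())
    print(f"Processed chunk {i}, total unique so far: {len(unique_ids):,}")
print(f"Total unique ObjID = {len(unique_ids):,}")
```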
Processed chunk 0, total unique so far: 39
Processed chunk 1, total unique so far: 43
Processed chunk 2, total unique so far: 44
[... chunks 3-317 omitted; the running total climbs monotonically ...]
Processed chunk 318, total unique so far: 1,531
Processed chunk 319, total unique so far: 1,531
✅ Total unique ObjID = 1,531
10-21
Speed test on the LSST pointing:
(base) qc59@dcc-login-05 /work/qc59 $ sbatch --array=5 multi_sorcha_test_speed.sh 48 32
Submitted batch job 38560288
(base) qc59@dcc-login-05 /work/qc59 $ sbatch --array=5 multi_sorcha_test_speed.sh 192 8
Submitted batch job 38560289
(base) qc59@dcc-login-05 /work/qc59 $ sbatch --array=5 multi_sorcha_test_speed.sh 128 12
Submitted batch job 38560290
(base) qc59@dcc-login-05 /work/qc59 $ sbatch --array=5 multi_sorcha_test_speed.sh 96 16
Submitted batch job 38560291
384 4
96 16 (96 orbits × 16 cores = 1,536 objects per chunk) takes the least time on the LSST pointing.
Now submitting the Argus st1 jobs with 96 16 for the remaining arrays 6-12:
(base) qc59@dcc-login-05 /work/qc59 $ sbatch --array=6-12 multi_sorcha_argus_st1.sh 96 16
Submitted batch job 38573821
38573821_12 scavenger sorcha qc59 PD 0:00 1 (AssocGrpMemLimit)
38573821_6 scavenger sorcha qc59 R 0:50 1 dcc-fergusonlab-05
38573821_7 scavenger sorcha qc59 R 0:50 1 dcc-dunsonlab-01
38573821_8 scavenger sorcha qc59 R 0:50 1 dcc-dunsonlab-02
38573821_9 scavenger sorcha qc59 R 0:50 1 dcc-fergusonlab-01
38573821_10 scavenger sorcha qc59 R 0:50 1 dcc-fergusonlab-02
38573821_11 scavenger sorcha qc59 R 0:50 1 dcc-fergusonlab-03
Also submitted for st2 and st3:
(base) qc59@dcc-login-02 **/work/qc59 $** sbatch --array=0-12 multi_sorcha_argus_st2.sh 96 16
Submitted batch job 38583919
(base) qc59@dcc-login-02 **/work/qc59 $** vim multi_sorcha_argus_st3.sh
(base) qc59@dcc-login-02 **/work/qc59 $** vim multi_sorcha_argus_st3.sh
(base) qc59@dcc-login-02 **/work/qc59 $** sbatch --array=0-12 multi_sorcha_argus_st3.sh 96 16
Submitted batch job 38583942
(base) qc59@dcc-login-02 **/work/qc59 $** squeue -u qc59
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
38070609 cosmology run qc59 R 7-19:22:53 1 dcc-cosmology-13
38583919_[6-12] scavenger sorcha qc59 PD 0:00 1 (Resources)
38583942_[0-12] scavenger sorcha qc59 PD 0:00 1 (Priority)
38583919_0 scavenger sorcha qc59 R 1:35 1 dcc-fergusonlab-05
38583919_1 scavenger sorcha qc59 R 1:35 1 dcc-fergusonlab-01
38583919_2 scavenger sorcha qc59 R 1:35 1 dcc-fergusonlab-02
38583919_3 scavenger sorcha qc59 R 1:35 1 dcc-fergusonlab-03
38583919_4 scavenger sorcha qc59 R 1:35 1 dcc-comp-07
38583919_5 scavenger sorcha qc59 R 1:35 1 dcc-comp-10
Thought about the logic of the paper.
10-20
Instance 5 failed, but instances 3 and 4 succeeded.
Pointing-DB reading error: possibly due to too many workers reading the same file at once.
Now copying the .db file per worker.
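Conceptually, each worker copies the pointing DB to node-local scratch before opening it, so the SQLite reads don't all hit one file on the shared filesystem; multi_sorcha's --copy_inputs flag is what actually does this here. A sketch of the idea, with illustrative paths:

```python
import os
import shutil

def local_pointing_db(src: str) -> str:
    """Copy the pointing DB to node-local scratch once per worker process."""
    scratch = os.environ.get("TMPDIR", "/tmp")
    dst = os.path.join(scratch, f"pointing_{os.getpid()}.db")
    if not os.path.exists(dst):
        shutil.copy(src, dst)  # each worker then reads its own private copy
    return dst
```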
Trying it with:
(base) qc59@dcc-login-03 **/work/qc59 $** sbatch --array=5 multi_sorcha_argus_st1.sh 48 32
Submitted batch job 38540571
(base) qc59@dcc-login-05 **/work/qc59 $** seff 38540571
Job ID: 38540571
Array Job ID: 38540571_5
Cluster: dcc
User/Group: qc59/dukeusers
State: COMPLETED (exit code 0)
Nodes: 1
Cores per node: 84
CPU Utilized: 4-18:34:50
CPU Efficiency: 23.96% of 19-22:15:48 core-walltime
Job Wall-clock time: 05:41:37
Memory Utilized: 454.29 GB
Memory Efficiency: 82.60% of 550.00 GB (550.00 GB/node)

(base) qc59@dcc-login-02 **~ $** squeue -u qc59
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
38070609 cosmology run qc59 R 6-08:25:00 1 dcc-cosmology-13
38541617_5 scavenger sorcha qc59 R 44:30 1 dcc-dolbowlab-01
38540571_5 scavenger sorcha qc59 R 3:09:28 1 dcc-chsi-22
38540472_5 scavenger sorcha qc59 R 3:38:28 1 dcc-chsi-19
Try fewer objects per core
(base) qc59@dcc-login-04 **/work/qc59 $** sbatch --array=5 multi_sorcha_argus_st1.sh 1536 1
Submitted batch job 38541892
(base) qc59@dcc-login-05 **/work/qc59 $** seff 38541892
Job ID: 38541892
Array Job ID: 38541892_5
Cluster: dcc
User/Group: qc59/dukeusers
State: FAILED (exit code 1)
Nodes: 1
Cores per node: 84
CPU Utilized: 15:44:20
CPU Efficiency: 1.19% of 55-07:47:00 core-walltime
Job Wall-clock time: 15:48:25
Memory Utilized: 549.99 GB
Memory Efficiency: 100.00% of 550.00 GB (550.00 GB/node)
10-19
Instance 2 has been running successfully.
(base) qc59@dcc-login-02 **/work/qc59 $** sbatch --array=2-3 multi_sorcha_argus_st1.sh 48 32
Submitted batch job 38519113
(base) qc59@dcc-login-02 **/work/qc59 $** squeue -u qc59
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
38070609 cosmology run qc59 R 5-08:56:26 1 dcc-cosmology-13
38519113_2 scavenger sorcha qc59 R 0:03 1 dcc-dolbowlab-01
38519113_3 scavenger sorcha qc59 R 0:03 1 dcc-dunsonlab-01
Pointing DB not read successfully.
Now adding the copy-inputs flag:
(base) qc59@dcc-login-02 **/work/qc59 $** sbatch --array=3-5 multi_sorcha_argus_st1.sh 48 32
Submitted batch job 38531799
(base) qc59@dcc-login-02 **/work/qc59 $** squeue -u qc59
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
38070609 cosmology run qc59 R 5-16:32:31 1 dcc-cosmology-13
38531799_3 scavenger sorcha qc59 R 0:01 1 dcc-chsi-19
38531799_4 scavenger sorcha qc59 R 0:01 1 dcc-dolbowlab-01
38531799_5 scavenger sorcha qc59 R 0:01 1 dcc-dunsonlab-01
10-18
Submitted the exact same jobs to check whether the others are working correctly:
38389825_0 scavenger sorcha qc59 R 3:25:30 1 dcc-dolbowlab-01
38389825_1 scavenger sorcha qc59 R 3:25:30 1 dcc-fergusonlab-01
To compare it with:
(base) qc59@dcc-login-04 **/work/qc59 $** ls -lh sorcha_parallel_run_argus_0_134999_night_st1/run_381
run_38110364_0/ run_38118876_1/ run_38128343_4/ run_38130272_7/ run_38133112_10/
run_38111741_11/ run_38118901_2/ run_38128355_5/ run_38131608_8/
run_38118792_0/ run_38128223_3/ run_38129491_6/ run_38133045_9/
New file size:
(base) qc59@dcc-login-05 **/work/qc59 $** ls -lh sorcha_parallel_run_argus_0_134999_night_st1_test2/run_38389825_1
total 62G
drwxr-xr-x. 2 qc59 dukeusers 4.0K Oct 17 12:12 **1**
-rw-r--r--. 1 qc59 dukeusers 62G Oct 17 12:11 output_1.h5
Old file size:
(base) qc59@dcc-login-05 **/work/qc59 $** ls -lh sorcha_parallel_run_argus_0_134999_night_st1/run_38118876_1
total 14G
drwxr-xr-x. 2 qc59 dukeusers 4.0K Oct 14 17:38 **1**
-rw-r--r--. 1 qc59 dukeusers 14G Oct 14 17:38 output_1.h5
Added debug output of the start and end index of the input ObjIDs for each array job.
Moved the combined output .h5 files from each run_xxx folder into a single combined-df folder for easier data processing later, and added some debug print statements (e.g., the sizes of the output files).
The new multi_sorcha run script is called multi_sorcha_run_combine.py.
Submitted new jobs:
(base) qc59@dcc-login-04 **/work/qc59 $** sbatch --array=2-3 multi_sorcha_argus_st1.sh 48 32
Submitted batch job 38441834
10-17
A few runs processed very little.
Re-submitted:
(base) qc59@dcc-login-03 **/work/qc59 $** sbatch --array=0-0 multi_sorcha_argus_st1.sh 48 32
Submitted batch job 38161096
10-15
Next: combine files from st3.
Check results from st1 and st4, especially whether the jobs actually completed.
10-14
Processed the different stages of LSST objects.
Do the same for Argus: run the 10k objects on the Argus pointing.
Still need stages 1, 2, and 3, with linking turned off.
First, try some small jobs to make sure it works.
- Check on the previously running jobs: Argus with r150
(base) qc59@dcc-login-04 **/work/qc59 $** vim multi_sorcha.sh
(base) qc59@dcc-login-04 **/work/qc59 $** ls sorcha_parallel_run_argus_single_node
**run_37753478_2** **run_37775565_0** **run_37775583_0** **run_37775722_0** **run_37775745_0**
**run_37753479_0** **run_37775566_0** **run_37775596_0** **run_37775723_0** **run_37775752_0**
**run_37753479_3** **run_37775567_0** **run_37775629_0** **run_37775724_0** **run_37775753_0**
**run_37753480_1** **run_37775568_0** **run_37775638_0** **run_37775725_0**
**run_37775564_0** **run_37775569_0** **run_37775644_0** **run_37775741_0**
(base) qc59@dcc-login-04 **/work/qc59 $** seff 37753478
Job ID: 37753478
Array Job ID: 37753478_2
Cluster: dcc
User/Group: qc59/dukeusers
State: COMPLETED (exit code 0)
Nodes: 1
Cores per node: 60
CPU Utilized: 4-04:13:05
CPU Efficiency: 46.93% of 8-21:33:00 core-walltime
Job Wall-clock time: 03:33:33
Memory Utilized: 234.51 GB
Memory Efficiency: 50.25% of 466.69 GB (466.69 GB/node)
The full array of objects, 768 objects per job (norbits=24 × cores=32); I need to document the night-time window I chose.
rows=768, cores_req=32, norbits=24
sorcha_parallel_run_argus_0_134999_night/
**run_37777666_13** **run_37777669_2** **run_37778309_5** **run_37784739_8** **run_37789480_11**
**run_37777667_0** **run_37778303_3** **run_37784535_6** **run_37789197_9** **run_37792139_12**
**run_37777668_1** **run_37778307_4** **run_37784564_7** **run_37789373_10**
(base) qc59@dcc-login-04 **/work/qc59 $** seff 37777667
Job ID: 37777667
Array Job ID: 37777666_0
Cluster: dcc
User/Group: qc59/dukeusers
State: COMPLETED (exit code 0)
Nodes: 1
Cores per node: 60
CPU Utilized: 2-01:09:35
CPU Efficiency: 38.11% of 5-09:00:00 core-walltime
Job Wall-clock time: 02:09:00
Memory Utilized: 240.46 GB
Memory Efficiency: 53.44% of 450.00 GB (450.00 GB/node)
I chose to try double the previous chunk size:
rows=768*2, cores_req=32, norbits=48
Four stages for Argus (my numbering is a bit off compared to the old LSST numbers; trailing loss and camera footprint are always on):
st1: nothing else on
st2: vignetting and mag limit on (the 16-mag saturation limit was not turned off); randomization off (finished)
st3: randomization on, fading off (finished) - corresponds to LSST st2
st4: fading function on - corresponds to LSST st3
st3:
(base) qc59@dcc-login-04 **/work/qc59 $** sbatch --array=0-11 multi_sorcha_argus_st3.sh 48 32
Submitted batch job 38085159
38085159_[10-11] scavenger sorcha qc59 PD 0:00 1 (AssocGrpMemLimit)
38085159_9 scavenger sorcha qc59 R 1:17 1 dcc-dunsonlab-01
38085159_7 scavenger sorcha qc59 R 3:44 1 dcc-fergusonlab-05
38085159_0 scavenger sorcha qc59 R 5:57 1 dcc-dolbowlab-01
38085159_3 scavenger sorcha qc59 R 5:57 1 dcc-dunsonlab-02
38085159_4 scavenger sorcha qc59 R 5:57 1 dcc-fergusonlab-01
38085159_5 scavenger sorcha qc59 R 5:57 1 dcc-fergusonlab-02
38085159_6 scavenger sorcha qc59 R 5:57 1 dcc-fergusonlab-03
output:
(base) qc59@dcc-login-01 **/work/qc59/sorcha_parallel_run_argus_0_134999_night_st3 $** ls
**run_38085035_1** **run_38085160_0** **run_38085163_3** **run_38085166_6** **run_38085270_9**
**run_38085036_0** **run_38085161_1** **run_38085164_4** **run_38085205_7** **run_38087102_10**
**run_38085159_11** **run_38085162_2** **run_38085165_5** **run_38085220_8**
st2:
(base) qc59@dcc-login-04 **/work/qc59 $** sbatch --array=0-11 multi_sorcha_argus_st2.sh 48 32
Submitted batch job 38085453
38085159_[10-11] scavenger sorcha qc59 PD 0:00 1 (AssocGrpMemLimit)
Next: check the memory usage, request more memory, and double-check how I labeled the stages.
Log files:
Outer-loop log: /logs/parallel_sorcha_runxxxx.log
Inner-loop logs: in each run_xxx_0/0 subfolder of the sorcha parallel run
Results are in sorcha_parallel_run_argus_0_134999_night_st2
Some of the file combination is not working for st3; the files need to be combined manually (sketch below).
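A sketch of the manual combine, streaming each run's output into one table instead of concatenating everything in memory. The glob pattern, output name, and HDF key are assumptions; append requires table format and consistent columns across files:

```python
import glob
import pandas as pd

# Stream-append each run's output into one table to avoid one giant concat.
with pd.HDFStore("impactor_run_0_134999_st3_combined.h5", mode="w") as out:
    for path in sorted(glob.glob(
            "sorcha_parallel_run_argus_0_134999_night_st3/run_*/output_*.h5")):
        df = pd.read_hdf(path)             # one run's detections
        out.append("df", df, index=False)  # table format, consistent columns
```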
Job memory usage: ~240 GB (st3) vs ~490 GB (st2), per seff:
(base) qc59@dcc-login-01 **/work/qc59 $** seff 38085159_4
Job ID: 38085164
Array Job ID: 38085159_4
Cluster: dcc
User/Group: qc59/dukeusers
State: COMPLETED (exit code 0)
Nodes: 1
Cores per node: 68
CPU Utilized: 4-07:25:12
CPU Efficiency: 43.04% of 10-00:18:16 core-walltime
Job Wall-clock time: 03:32:02
Memory Utilized: 240.79 GB
(base) qc59@dcc-login-01 **/work/qc59 $** seff 38085453_6
Job ID: 38088780
Array Job ID: 38085453_6
Cluster: dcc
User/Group: qc59/dukeusers
State: COMPLETED (exit code 0)
Nodes: 1
Cores per node: 68
CPU Utilized: 4-16:24:36
CPU Efficiency: 29.30% of 15-23:35:44 core-walltime
Job Wall-clock time: 05:38:28
Memory Utilized: 488.43 GB
Memory Efficiency: 88.81% of 550.00 GB (550.00 GB/node)
submitted jobs for st1 and st4
st1
(base) qc59@dcc-login-01 **/work/qc59 $** sbatch --array=0-11 multi_sorcha_argus_st1.sh 48 32
Submitted batch job 38111741
st4
(base) qc59@dcc-login-01 **/work/qc59 $** sbatch --array=0-11 multi_sorcha_argus_st4.sh 48 32
Submitted batch job 38109256
10-11
Finished running; proceeding to analysis:
impactor_run_0_134999_st1.h5
impactor_run_0_134999_st2.h5
impactor_run_0_134999_st3.h5
10-10
Turn on the filters one by one (trailing loss and camera footprint always on):
1: vignetting and mag limit (the 16-mag saturation limit was not turned off)
2: randomization
3: fading function
4: linking filter - with the drop of unlinked objects turned off
1:
(base) qc59@dcc-login-01 /work/qc59 $ sbatch sorcha_run.sh -c sorcha_config_st1.ini -p ./cleaned_synthetic_impactors_0_134999_color.csv --orbits ./cleaned_synthetic_impactors_0_134999_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_0_134999_st1 --ew impactor_run_0_134999_st1_complete
Submitted batch job 37874890
2:
(base) qc59@dcc-login-01 /work/qc59 $ sbatch sorcha_run.sh -c sorcha_config_st2.ini -p ./cleaned_synthetic_impactors_0_134999_color.csv --orbits ./cleaned_synthetic_impactors_0_134999_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_0_134999_st2 --ew impactor_run_0_134999_st2_complete
Submitted batch job 37874812
3:
(base) qc59@dcc-login-01 /work/qc59 $ sbatch sorcha_run.sh -c sorcha_config_st3.ini -p ./cleaned_synthetic_impactors_0_134999_color.csv --orbits ./cleaned_synthetic_impactors_0_134999_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_0_134999_st3 --ew impactor_run_0_134999_st3_complete
Submitted batch job 37874796
Make a pointing database with LSST depth but the Argus cadence:
(base) qc59@dcc-login-01 **/work/qc59 $** sbatch make_argus_pointing.sh
Submitted batch job 37869691
LSST depth: see "Key numbers | Rubin Observatory".


Turn the filters on sequentially, to show how they're correlated?
How to use the same random seed? (A possible approach is sketched after the submissions below.)
Tried turning on the linking flag while keeping everything that linking would drop:
drop_unlinked = False
(base) qc59@dcc-login-05 **/work/qc59 $** sbatch sorcha_run.sh -c sorcha_config_all.ini -p ./r_150_color.csv --orbits ./r_150_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_r_150_w_all_linking_doc --ew impactor_run_r_150_w_all_linking_doc_complete
Submitted batch job 37867969
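On the random-seed question above: Sorcha logs a base RNG seed each run (see the 09-29 logs below), and as far as I recall it can be pinned through a SORCHA_SEED environment variable; treat that as an assumption to verify against the docs. Since sbatch exports the submission environment by default, setting it before submitting should propagate:

```python
import os
import subprocess

# Assumption: Sorcha reads SORCHA_SEED to fix its base RNG seed (verify in docs).
env = dict(os.environ, SORCHA_SEED="42")
subprocess.run(
    ["sbatch", "sorcha_run.sh",
     "-c", "sorcha_config_all.ini",
     "-p", "./r_150_color.csv", "--orbits", "./r_150_orbit.csv",
     "--pointing-db", "baseline_v3.4_10yrs.db",
     "-o", "./", "-t", "impactor_run_r_150_seeded"],
    env=env, check=True)
```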
Argus with all but linking
(base) qc59@dcc-login-05 **/work/qc59 $** sbatch sorcha_run.sh -c Argus_circular_approximation.ini -p ./r_150_color.csv --orbits ./r_150_orbit.csv --pointing-db sorcha_prerocess/argus_observations_10yr.db -o ./ -t impactor_run_r_150_argus_w_all_but_linking --ew impactor_run_r_150_argus_w_all_but_linking_complete
Submitted batch job 37868093
10-09
ids_na_impactor: the objects that were overlooked while generating the impactor information.
import pandas as pd  # needed for pd.to_pickle

to_save = {
    "ids_na_impactor": ids_na_impactor,
}
pd.to_pickle(to_save, "ids_na_impactor.pkl")  # reload later with pd.read_pickle
Remake the distance plots for impactors with standalone_visualize_neos.py.
Running code (from ~/Documents/2_Research.nosync/Argus/synthetic_impactors):
python standalone_visualize_neos.py \
  --input ../neo_input_1.h5 \
  --adjusted ../neo_adjusted_epochs_combined_0_134999.h5 \
  --summary ../adjustment_summary_combined_0_134999.csv \
  --objects N000040d \
  --output ./visualizations \
  --adjusted-key table
saved under: synthetic_impactors/visualizations/neo_N000040d_comparison.html

10-08
Analyze r50 for the full set.
Analyze a chunk of the full data; what is the best way to analyze it?
New full-data Sorcha run, with nothing turned on:
/work/qc59/sorcha_parallel_run_argus_0_134999_night
10-07
combine files from sorcha output:
(sorcha) qc59@dcc-login-01 /work/qc59 $ sbatch combine_sorcha_outputs.sh
Submitted batch job 37805940
Submitted Argus with everything but linking, on r50 objects:
37805068 common sorcha qc59 R 1:44:06 1 dcc-core-30
Combined file:
sbatch combine_sorcha.sh
(base) qc59@dcc-login-02 **/work/qc59 $** seff 37819507
Job ID: 37819507
Cluster: dcc
User/Group: qc59/dukeusers
State: TIMEOUT (exit code 0)
Cores: 1
CPU Utilized: 03:50:34
CPU Efficiency: 95.88% of 04:00:28 core-walltime
Job Wall-clock time: 04:00:28
Memory Utilized: 500.00 GB
Memory Efficiency: 100.00% of 500.00 GB (500.00 GB/node)
10-06
Submitted a job with 768 objects per job (24 orbits × 32 cores):
sbatch --array=0-13 multi_sorcha_argus.sh 24 32
Output:
OUT=/work/qc59/sorcha_parallel_run_argus_single_node/run_${SLURM_JOB_ID}_${SLURM_ARRAY_TASK_ID}
Resubmitted the job for Argus, r_50, night-only pointings:
(base) qc59@dcc-login-05 /work/qc59 $ sbatch sorcha_run.sh -c Argus_circular_approximation.ini -p ./cleaned_r_50_color.csv --orbits ./cleaned_r_50_orbit.csv --pointing-db sorcha_prerocess/argus_observations_10yr_night.db -o ./ -t impactor_run_r_50_10yr_w_nothing_argus_night_test --ew impactor_run_r_50_10yr_w_nothing_argus_night_test_complete
Submitted batch job 37783531
Also submitted one on dcc: 37783620 common sorcha qc59 PD 0:00 1 (Priority)
The ones that need checking:
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
37777666_[9-13] cosmology sorcha qc59 PD 0:00 1 (AssocGrpMemLimit)
37783531 cosmology sorcha qc59 PD 0:00 1 (AssocGrpMemLimit)
37777666_8 cosmology sorcha qc59 R 2:01:30 1 dcc-cosmology-06
37777666_7 cosmology sorcha qc59 R 2:06:58 1 dcc-cosmology-11
37777666_6 cosmology sorcha qc59 R 2:08:50 1 dcc-cosmology-12
10-05
To run on Argus, this works:
sbatch --array=0-2 multi_sorcha_argus.sh 2 32
Submitted batch job 37753478
Submitted a new pointing job, restricted to night time only:
sbatch make_argus_pointing.sh
Submitted batch job 37771771
This shrank the Argus pointing database by a factor of ~2.7:
(base) qc59@dcc-login-03 **/work/qc59/sorcha_prerocess $** sqlite3 argus_observations_10yr.db 'SELECT COUNT(*) FROM observations;'
5258881
(base) qc59@dcc-login-03 **/work/qc59/sorcha_prerocess $** sqlite3 argus_observations_10yr_night.db 'SELECT COUNT(*) FROM observations;'
1944845
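The night-only DB comes from filtering the observations table (make_argus_pointing.sh is what actually did it). A sketch of the idea; the `sun_alt_deg` column and the -12° cut are purely illustrative, not necessarily the real schema or criterion:

```python
import sqlite3

src = sqlite3.connect("argus_observations_10yr.db")
src.execute("ATTACH DATABASE 'argus_observations_10yr_night.db' AS night")
# Hypothetical predicate: keep pointings with the Sun below -12 deg altitude.
src.execute("""
    CREATE TABLE night.observations AS
    SELECT * FROM observations WHERE sun_alt_deg < -12
""")
src.commit()
src.close()
```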
Run on the night pointings; submitted a bash job to time the runs, plus a short probe to test which chunk size is best.
(sorcha) qc59@dcc-cosmology-15 **/work/qc59/sorcha_prerocess $** for n in 128 160 192 224 256 320; do jid=$(sbatch --export=ALL,N_OBJS=$n probe_any.sbatch | awk '{print $4}'); echo "$jid $n" | tee -a probe_map.txt; done
37775402 cosmology sorcha qc59 R 29:17 1 dcc-cosmology-14
Run-time error. In the queue below, top to bottom:
first: 256 obj, 32 CPU
second: the old but still-running job
third: 256 obj, 8 CPU
fourth: 256 obj, 1 CPU
last: 128 obj, 1 CPU
37775638 cosmology probe-mu qc59 PD 0:00 1 (AssocGrpMemLimit)
37775402 cosmology sorcha qc59 R 1:18:43 1 dcc-cosmology-14
37775629 cosmology probe-mu qc59 R 1:10 1 dcc-cosmology-06
37775583 cosmology probe-mu qc59 R 15:54 1 dcc-cosmology-13
37775644 cosmology probe-mu qc59 PD 0:00 1 (AssocGrpMemLimit)
newly submitted:
(base) qc59@dcc-login-05 **/work/qc59/sorcha_prerocess $** sbatch -p cosmology --exclusive --cpus-per-task=1 --mem=450G \
--export=ALL,N_OBJS=256,CORES=1 probe_multi.sbatch
sbatch -p cosmology --exclusive --cpus-per-task=8 --mem=450G \
--export=ALL,N_OBJS=256,CORES=8 probe_multi.sbatch
sbatch -p cosmology --exclusive --cpus-per-task=32 --mem=450G \
--export=ALL,N_OBJS=256,CORES=32 probe_multi.sbatch
sbatch -p cosmology --exclusive --cpus-per-task=1 --mem=450G \
--export=ALL,N_OBJS=128,CORES=1 probe_multi.sbatch
Submitted batch job 37775722
Submitted batch job 37775723
Submitted batch job 37775724
Submitted batch job 37775725
And a few more:
(base) qc59@dcc-login-05 **/work/qc59/sorcha_prerocess $** sbatch -p cosmology --exclusive --cpus-per-task=8 --mem=450G --export=ALL,N_OBJS=128,CORES=8 probe_multi.sbatch
Submitted batch job 37775741
(base) qc59@dcc-login-05 **/work/qc59/sorcha_prerocess $** sbatch -p cosmology --exclusive --cpus-per-task=1 --mem=450G --export=ALL,N_OBJS=96,CORES=1 probe_multi.sbatch
Submitted batch job 37775745
(base) qc59@dcc-login-05 **/work/qc59/sorcha_prerocess $** sbatch -p cosmology --exclusive --cpus-per-task=1 --mem=450G --export=ALL,N_OBJS=160,CORES=1 probe_multi.sbatch
Submitted batch job 37775752
(base) qc59@dcc-login-05 **/work/qc59/sorcha_prerocess $** sbatch -p cosmology --exclusive --cpus-per-task=1 --mem=450G --export=ALL,N_OBJS=192,CORES=1 probe_multi.sbatch
Submitted batch job 37775753
Summary table for these probe jobs:
(base) qc59@dcc-login-05 **~ $** sacct -j 37775402,37775722,37775723,37775724,37775725,37775741,37775745,37775752,37775753 --format=JobID,JobName,Partition,AllocCPUS,ReqMem,State,Elapsed,MaxRSS,AveRSS,NodeList
JobID JobName Partition AllocCPUS ReqMem State Elapsed MaxRSS AveRSS NodeList
------------ ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------------
37775402 sorcha cosmology 60 348432M COMPLETED 02:52:51 dcc-cosmology-+
37775402.ba+ batch 60 COMPLETED 02:52:51 30328K 30328K dcc-cosmology-+
37775402.ex+ extern 60 COMPLETED 02:52:51 256K 256K dcc-cosmology-+
37775402.0 sorcha 60 COMPLETED 02:52:47 127634416K 127634416K dcc-cosmology-+
37775722 probe-mul+ cosmology 60 450G COMPLETED 02:52:57 dcc-cosmology-+
37775722.ba+ batch 60 COMPLETED 02:52:57 29912K 29912K dcc-cosmology-+
37775722.ex+ extern 60 COMPLETED 02:52:57 256K 256K dcc-cosmology-+
37775722.0 python3 1 COMPLETED 02:52:53 128000508K 128000508K dcc-cosmology-+
37775723 probe-mul+ cosmology 60 450G COMPLETED 02:53:46 dcc-cosmology-+
37775723.ba+ batch 60 COMPLETED 02:53:46 29932K 29932K dcc-cosmology-+
37775723.ex+ extern 60 COMPLETED 02:53:46 256K 256K dcc-cosmology-+
37775723.0 python3 8 COMPLETED 02:53:42 128003764K 128003764K dcc-cosmology-+
37775724 probe-mul+ cosmology 60 450G COMPLETED 02:51:55 dcc-cosmology-+
37775724.ba+ batch 60 COMPLETED 02:51:55 29980K 29980K dcc-cosmology-+
37775724.ex+ extern 60 COMPLETED 02:51:55 256K 256K dcc-cosmology-+
37775724.0 python3 32 COMPLETED 02:51:51 128002516K 128002516K dcc-cosmology-+
37775725 probe-mul+ cosmology 60 450G COMPLETED 01:49:03 dcc-cosmology-+
37775725.ba+ batch 60 COMPLETED 01:49:03 29968K 29968K dcc-cosmology-+
37775725.ex+ extern 60 COMPLETED 01:49:03 256K 256K dcc-cosmology-+
37775725.0 python3 1 COMPLETED 01:48:59 62890416K 62890416K dcc-cosmology-+
37775741 probe-mul+ cosmology 60 450G COMPLETED 01:49:50 dcc-cosmology-+
37775741.ba+ batch 60 COMPLETED 01:49:50 30200K 30200K dcc-cosmology-+
37775741.ex+ extern 60 COMPLETED 01:49:51 0 0 dcc-cosmology-+
37775741.0 python3 8 COMPLETED 01:49:45 62890208K 62890208K dcc-cosmology-+
37775745 probe-mul+ cosmology 60 450G COMPLETED 01:33:06 dcc-cosmology-+
37775745.ba+ batch 60 COMPLETED 01:33:06 29932K 29932K dcc-cosmology-+
37775745.ex+ extern 60 COMPLETED 01:33:06 256K 256K dcc-cosmology-+
37775745.0 python3 1 COMPLETED 01:33:01 47317588K 47317588K dcc-cosmology-+
37775752 probe-mul+ cosmology 60 450G COMPLETED 02:05:11 dcc-cosmology-+
37775752.ba+ batch 60 COMPLETED 02:05:11 30228K 30228K dcc-cosmology-+
37775752.ex+ extern 60 COMPLETED 02:05:11 256K 256K dcc-cosmology-+
37775752.0 python3 1 COMPLETED 02:05:07 78195620K 78195620K dcc-cosmology-+
37775753 probe-mul+ cosmology 60 450G COMPLETED 02:18:11 dcc-cosmology-+
37775753.ba+ batch 60 COMPLETED 02:18:11 29932K 29932K dcc-cosmology-+
37775753.ex+ extern 60 COMPLETED 02:18:11 256K 256K dcc-cosmology-+
37775753.0 python3 1 COMPLETED 02:18:06 93124744K 93124744K dcc-cosmology-+
| JobID | N Objs | Cores (CORES) | CPUs per task | Elapsed | MaxRSS (GB) | Notes |
|---|---|---|---|---|---|---|
| 37775745 | 96 | 1 | 1 | 01:33:01 | 47.3 | Small sample, short, clean run |
| 37775752 | 160 | 1 | 1 | 02:05:07 | 78.2 | Scales roughly linearly |
| 37775753 | 192 | 1 | 1 | 02:18:06 | 93.1 | Consistent trend |
| 37775725 | 128 | 1 | 1 | 01:48:59 | 62.9 | Submitted as N_OBJS=128; fits the linear memory trend |
| 37775741 | 128 | 8 | 8 | 01:49:45 | 62.9 | Same memory and runtime as 37775725: 8 cores gave no speedup |
| 37775722 | 256 | 1 | 1 | 02:52:53 | 128.0 | Full 256 objs, 1-core baseline |
| 37775723 | 256 | 8 | 8 | 02:53:42 | 128.0 | Essentially identical runtime to 1 core, no speedup |
| 37775724 | 256 | 32 | 32 | 02:51:51 | 128.0 | No improvement; likely I/O bottleneck |
| 37775402 | (baseline "sorcha") | — | — | 02:52:47 | 127.6 | Same memory footprint as the 256-obj probes |

Memory scales almost exactly linearly at ~0.5 GB per object (47.3/96 ≈ 78.2/160 ≈ 128.0/256 ≈ 0.49), and adding cores gave no runtime benefit in any configuration.
10-03
Run for objects r>50.
With every filter:
sbatch sorcha_run_2.sh -c sorcha_config_demo.ini -p ./cleaned_r_50_color.csv --orbits ./cleaned_r_50_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_r_50_10yr_w_everything --ew impactor_run_r_50_10yr_w_everything_complete
Submitted batch job 37717058
with linking only
sbatch sorcha_run_2.sh -c sorcha_config_demo.ini -p ./r_50_color.csv --orbits r_50_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_r_50_w_linking
(sorcha) qc59@dcc-login-04 **/work/qc59 $** sbatch sorcha_run_2.sh -c sorcha_config_demo.ini -p ./r_50_color.csv --orbits ./r_50_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_r_50_10yr_w_linking --ew impactor_run_r_50_10yr_w_linking_complete
Submitted batch job 37667624: segmentation error.
Cleaned the NaN rows.
(sorcha) qc59@dcc-login-04 **/work/qc59 $** sbatch sorcha_run_2.sh -c sorcha_config_demo.ini -p ./cleaned_r_50_color.csv --orbits ./cleaned_r_50_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_r_50_10yr_w_linking --ew impactor_run_r_50_10yr_w_linking_complete
Submitted batch job 37668116
succeeded
And with nothing:
(sorcha) qc59@dcc-login-04 **/work/qc59 $** sbatch sorcha_run_2.sh -c sorcha_config_nothing.ini -p ./r_50_color.csv --orbits ./r_50_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_r_50_10yr_nothing --ew impactor_run_r_50_10yr_nothing_complete
Submitted batch job 37667741
Should not work (uncleaned input)...
New job submitted
(sorcha) qc59@dcc-login-04 **/work/qc59 $** sbatch sorcha_run_2.sh -c sorcha_config_nothing.ini -p ./cleaned_r_50_color.csv --orbits cleaned_r_50_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_r_50_nothing
Submitted batch job 37668457
Did not succeed - is it a memory issue?
Assigning more memory and trying again:
sbatch sorcha_run_2.sh -c sorcha_config_nothing.ini -p ./cleaned_r_50_color.csv --orbits cleaned_r_50_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_r_50_nothing
Submitted batch job 37697825
And with Argus, r>50:
sbatch sorcha_run_2.sh -c Argus_circular_approximation.ini -p ./r_50_color.csv --orbits ./r_50_orbit.csv --pointing-db sorcha_prerocess/argus_observations_10yr.db -o ./ -t impactor_run_r_50_10yr_nothing_argus --ew impactor_run_r_50_10yr_nothing_argus_complete
Submitted batch job 37667941:
Should not work (uncleaned input).
new job submitted
(sorcha) qc59@dcc-login-04 **/work/qc59 $** sbatch sorcha_run_2.sh -c Argus_circular_approximation.ini -p ./cleaned_r_50_color.csv --orbits ./cleaned_r_50_orbit.csv --pointing-db sorcha_prerocess/argus_observations_10yr.db -o ./ -t impactor_run_r_50_10yr_nothing_argus --ew impactor_run_r_50_10yr_nothing_argus_complete
Submitted batch job 37668476
Potential memory issue; tried again:
sbatch sorcha_run_2.sh -c Argus_circular_approximation.ini -p ./cleaned_r_50_color.csv --orbits ./cleaned_r_50_orbit.csv --pointing-db sorcha_prerocess/argus_observations_10yr.db -o ./ -t impactor_run_r_50_10yr_w_nothing_argus --ew impactor_run_r_50_10yr_w_nothing_argus_complete
Submitted batch job 37698131
Still not working; trying a single node:
sbatch sorcha_run_2.sh -c Argus_circular_approximation.ini -p ./cleaned_r_50_color.csv --orbits ./cleaned_r_50_orbit.csv --pointing-db sorcha_prerocess/argus_observations_10yr.db -o ./ -t impactor_run_r_50_10yr_w_nothing_argus --ew impactor_run_r_50_10yr_w_nothing_argus_complete
Submitted batch job 37726913
Still running out of memory; now trying:
(base) qc59@dcc-login-05 **/work/qc59 $** sbatch --array=0-2 multi_sorcha_argus.sh 2 32
Submitted batch job 37753478
(base) qc59@dcc-login-05 **/work/qc59 $** squeue -u qc59
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
37753478_0 cosmology sorcha qc59 R 0:13 1 dcc-cosmology-12
37753478_1 cosmology sorcha qc59 R 0:13 1 dcc-cosmology-13
37753478_2 cosmology sorcha qc59 R 0:13 1 dcc-cosmology-14
(base) qc59@dcc-login-05 **/work/qc59 $** squeue -u qc59
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
37753478_0 cosmology sorcha qc59 R 1:58:09 1 dcc-cosmology-12
37753478_1 cosmology sorcha qc59 R 1:58:09 1 dcc-cosmology-13
37753478_2 cosmology sorcha qc59 R 1:58:09 1 dcc-cosmology-14
(base) qc59@dcc-login-05 **/work/qc59 $** squeue -u qc59
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
37753478_0 cosmology sorcha qc59 R 1:58:28 1 dcc-cosmology-12
37753478_1 cosmology sorcha qc59 R 1:58:28 1 dcc-cosmology-13
37753478_2 cosmology sorcha qc59 R 1:58:28 1 dcc-cosmology-14
srun -n 1 -c ${SLURM_CPUS_PER_TASK} python3 multi_sorcha_write.py \
--config Argus_circular_approximation.ini \
--input_orbits ./cleaned_r_50_orbit.csv \
--input_physical ./cleaned_r_50_color.csv \
--pointing-db sorcha_prerocess/argus_observations_10yr.db \
--path "$OUT" \
--chunksize $(($1 * $2)) \
--norbits $1 \
--cores $2 \
--instance ${SLURM_ARRAY_TASK_ID} \
--cleanup \
--copy_inputs \
--merge_format h5
Results are in:
(base) qc59@dcc-login-05 **/work/qc59/sorcha_parallel_run_argus_single_node $** ls
**run_37753478_2** **run_37753479_0** **run_37753480_1**
combine files:
python combine_files.py --input-dir ../synthetic_impactors/neo_analysis_output_test --pattern "adjustment_summary_*.csv" --out ../synthetic_impactors/neo_analysis_output_test/adjustment_summary_combined_0_134999.csv --out-format csv --dedupe ObjID --keep last
10-02
Running with the cleaned version of data, with nothing on:
(base) qc59@dcc-login-02 **/work/qc59 $** sbatch sorcha_run.sh -c sorcha_config_nothing.ini -p ./cleaned_synthetic_impactors_0_134999_color.csv --orbits ./cleaned_synthetic_impactors_0_134999_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_0_134999_w_bright --ew impactor_run_0_134999_w_bright_complete
Submitted batch job 37631169
Running the same (cleaned files) with LSST, just with linking turned on.
(base) qc59@dcc-login-02 **/work/qc59 $** sbatch sorcha_run.sh -c sorcha_config_demo.ini -p ./cleaned_synthetic_impactors_0_134999_color.csv --orbits ./cleaned_synthetic_impactors_0_134999_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_0_134999_w_linking_bright --ew impactor_run_0_134999_w_linking_bright_complete
Submitted batch job 37630647
Done jobs:
impactor_run_0_134999_w_linking_bright-2025-10-02-12-10-11-p2713556-sorcha.err
impactor_run_0_134999_w_linking_bright-2025-10-02-12-10-11-p2713556-sorcha.log
impactor_run_0_134999_w_linking_bright_complete.csv
impactor_run_0_134999_w_linking_bright.h5
And maybe one with nothing turned on: done, via the parallel runs.
Without linking on, Argus on all objects:
sbatch --array=0-2 multi_sorcha.sh 156 32
OUT=/work/qc59/sorcha_parallel_run/run_${SLURM_JOB_ID}_${SLURM_ARRAY_TASK_ID}
mkdir -p "$OUT"
srun -n 1 -c ${SLURM_CPUS_PER_TASK} python3 multi_sorcha_write.py \
--config Argus_circular_approximation.ini \
--input_orbits ./cleaned_synthetic_impactors_0_134999_orbit.csv \
--input_physical ./cleaned_synthetic_impactors_0_134999_color.csv \
--pointing-db sorcha_prerocess/argus_observations_10yr.db \
--path "$OUT" \
--chunksize $(($1 * $2)) \
--norbits $1 \
--cores $2 \
--instance ${SLURM_ARRAY_TASK_ID} \
--cleanup \
--copy_inputs \
--merge_format h5
# optionally add: --assist_cache /hpc/home/qc59/.cache/sorcha
# optionally add: --stats neo_stats
Submitted jobs:
37639514_2 cosmology sorcha qc59 R 1:03:14 1 dcc-cosmology-10
37639514_0 cosmology sorcha qc59 R 5:13:14 1 dcc-cosmology-01
37639514_1 cosmology sorcha qc59 R 5:13:14 1 dcc-cosmology-02
Retry with the cosmology partition, 300 GB memory, and a smaller chunk:
(sorcha) qc59@dcc-login-04 **/work/qc59 $** sbatch --array=0-13 multi_sorcha.sh 32 32
Submitted batch job 37698940
#!/bin/bash
#SBATCH --job-name=sorcha
#SBATCH --partition=cosmology
#SBATCH --nodes=1
#SBATCH --ntasks=1 # 1 Python process that uses multiprocessing
#SBATCH --cpus-per-task=32 # node has 76 CPUs; use 32
#SBATCH --mem=300G # a safe request below 348432 MB
#SBATCH --time=24:00:00
#SBATCH --output=./logs/parallel_sorcha-argus-%J.log
# Run from the directory where you called sbatch
cd "$SLURM_SUBMIT_DIR"
# --- Conda setup ---
# If your cluster uses modules, you may need: module load anaconda (or similar)
source "$(conda info --base)/etc/profile.d/conda.sh"
conda activate sorcha
OUT=/work/qc59/sorcha_parallel_run_argus/run_${SLURM_JOB_ID}_${SLURM_ARRAY_TASK_ID}
mkdir -p "$OUT"
srun -n 1 -c ${SLURM_CPUS_PER_TASK} python3 multi_sorcha_write.py \
--config Argus_circular_approximation.ini \
--input_orbits ./cleaned_synthetic_impactors_0_134999_orbit.csv \
--input_physical ./cleaned_synthetic_impactors_0_134999_color.csv \
--pointing-db sorcha_prerocess/argus_observations_10yr.db \
--path "$OUT" \
--chunksize $(($1 * $2)) \
--norbits $1 \
--cores $2 \
--instance ${SLURM_ARRAY_TASK_ID} \
--cleanup \
--copy_inputs \
--merge_format h5
# optionally add: --assist_cache /hpc/home/qc59/.cache/sorcha
# optionally add: --stats neo_stats
This still won't work for all of the jobs; very likely a memory issue.
Also tried, with linking on, a smaller chunk size for a lower memory requirement:
sbatch --array=0-13 multi_sorcha_argus_filtering.sh 32 32
Tried ~1k objects per job (32 × 32 = 1,024), with filtering (linking) for Argus:
(base) qc59@dcc-login-02 **/work/qc59 $** sbatch --array=0-13 multi_sorcha_argus_filtering.sh 32 32
Submitted batch job 37655849
Killed by lack of memory
Inside the .sh file:
OUT=/work/qc59/sorcha_parallel_run_argus_w_linking/run_${SLURM_JOB_ID}_${SLURM_ARRAY_TASK_ID}
mkdir -p "$OUT"
srun -n 1 -c ${SLURM_CPUS_PER_TASK} python3 multi_sorcha_write.py \
--config Argus_circular_approximation_filtering.ini \
--input_orbits ./cleaned_synthetic_impactors_0_134999_orbit.csv \
--input_physical ./cleaned_synthetic_impactors_0_134999_color.csv \
--pointing-db sorcha_prerocess/argus_observations_10yr.db \
--path "$OUT" \
--chunksize $(($1 * $2)) \
--norbits $1 \
--cores $2 \
--instance ${SLURM_ARRAY_TASK_ID} \
--cleanup \
--copy_inputs \
--merge_format h5
# optionally add: --assist_cache /hpc/home/qc59/.cache/sorcha
# optionally add: --stats neo_stats
Retried on 10/03 with more memory:
sbatch --array=0-13 multi_sorcha_argus_filtering.sh 32 32
Submitted batch job 3769995
inside .sh:
#!/bin/bash
#SBATCH --job-name=sorcha
##SBATCH --partition=cosmology
#SBATCH --nodes=1
#SBATCH --ntasks=1 # 1 Python process that uses multiprocessing
#SBATCH --cpus-per-task=32 # node has 76 CPUs; use 32
#SBATCH --mem=300G # a safe request below 348432 MB
##SBATCH --time=24:00:00
#SBATCH --output=./logs/parallel-sorcha-argus-%J.log
# Run from the directory where you called sbatch
cd "$SLURM_SUBMIT_DIR"
# --- Conda setup ---
# If your cluster uses modules, you may need: module load anaconda (or similar)
source "$(conda info --base)/etc/profile.d/conda.sh"
conda activate sorcha
OUT=/work/qc59/sorcha_parallel_run_argus_w_linking_2/run_${SLURM_JOB_ID}_${SLURM_ARRAY_TASK_ID}
mkdir -p "$OUT"
srun -n 1 -c ${SLURM_CPUS_PER_TASK} python3 multi_sorcha_write.py \
--config Argus_circular_approximation_filtering.ini \
--input_orbits ./cleaned_synthetic_impactors_0_134999_orbit.csv \
--input_physical ./cleaned_synthetic_impactors_0_134999_color.csv \
--pointing-db sorcha_prerocess/argus_observations_10yr.db \
--path "$OUT" \
--chunksize $(($1 * $2)) \
--norbits $1 \
--cores $2 \
--instance ${SLURM_ARRAY_TASK_ID} \
--cleanup \
--copy_inputs \
--merge_format h5
# optionally add: --assist_cache /hpc/home/qc59/.cache/sorcha
# optionally add: --stats neo_stats
10-01
Sorcha with Argus:
Recovered 3 of the 4 large impactors lost by LSST linking.
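The recovery check is a set comparison of detected ObjIDs across runs; a sketch, with the three file names as illustrative stand-ins for the relevant LSST-without-linking, LSST-with-linking, and Argus outputs:

```python
import pandas as pd

# Illustrative file names; substitute the actual run outputs.
lsst_all  = set(pd.read_hdf("impactor_run_r_150_no_anything_rand.h5")["ObjID"])
lsst_link = set(pd.read_hdf("impactor_run_r_150_w_all_linking_doc.h5")["ObjID"])
argus     = set(pd.read_hdf("impactor_run_r_150_argus_w_all_but_linking.h5")["ObjID"])

lost_by_linking = lsst_all - lsst_link
recovered = lost_by_linking & argus
print(f"lost by LSST linking: {len(lost_by_linking)}; "
      f"recovered by Argus: {len(recovered)}")
```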
sbatch sorcha_run.sh -c Argus_circular_approximation.ini -p ./synthetic_impactors/color_orbit_output/Synthetic_Impactors_combined_0_134999_color.csv --orbits ./synthetic_impactors/color_orbit_output/Synthetic_Impactors_combined_0_134999_orbit.csv --pointing-db sorcha_prerocess/argus_observations_10yr.db -o ./ -t impactor_run_0_134999_10yr_argus_w_rand_fading --ew impactor_run_0_134999_10yr_argus_w_rand_fading_complete
srun: error: dcc-cosmology-14: task 0: Exited with exit code 245
New script to clean the input data: clean_input_data.py
python ./sorcha_prerocess/clean_input_data.py -c ./synthetic_impactors/color_orbit_output/Synthetic_Impactors_combined_0_134999_color.csv --orbits ./synthetic_impactors/color_orbit_output/Synthetic_Impactors_combined_0_134999_orbit.csv -o cleaned_synthetic_impactors_0_134999
The uncleaned input orbit and color files contain some NaN values:
============================================================
NaN Statistics:
============================================================
Rows with NaN in color file only: 0
Rows with NaN in orbit file only: 93
Rows with NaN in both files: 0
Total rows with NaN (either file): 93
Clean rows (no NaN): 17424
============================================================
Columns with NaN in orbit file:
epochMJD_TDB: 93 NaN values
Saving cleaned files:
cleaned_synthetic_impactors_0_134999_color.csv (17424 rows)
cleaned_synthetic_impactors_0_134999_orbit.csv (17424 rows)
Saving filtered NaN rows:
cleaned_synthetic_impactors_0_134999_color_nan_rows.csv (93 rows)
cleaned_synthetic_impactors_0_134999_orbit_nan_rows.csv (93 rows)
============================================================
Cleaning complete!
============================================================
Original: 17517 objects
Cleaned: 17424 objects
Removed: 93 objects (0.53%)
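Roughly what clean_input_data.py does, as a sketch under the assumption that both CSVs share an ObjID column (the real script may differ):

```python
import pandas as pd

color = pd.read_csv("Synthetic_Impactors_combined_0_134999_color.csv")
orbit = pd.read_csv("Synthetic_Impactors_combined_0_134999_orbit.csv")

# Flag any object with a NaN in either file (here: 93 rows, all NaN epochMJD_TDB).
bad = set(color.loc[color.isna().any(axis=1), "ObjID"]) | \
      set(orbit.loc[orbit.isna().any(axis=1), "ObjID"])

for name, df in [("color", color), ("orbit", orbit)]:
    df[~df["ObjID"].isin(bad)].to_csv(
        f"cleaned_synthetic_impactors_0_134999_{name}.csv", index=False)
    df[df["ObjID"].isin(bad)].to_csv(
        f"cleaned_synthetic_impactors_0_134999_{name}_nan_rows.csv", index=False)
```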
Test on Argus, with randomization and fading:
(sorcha) qc59@dcc-login-02 **/work/qc59 $** sbatch sorcha_run.sh -c Argus_circular_approximation.ini -p ./cleaned_synthetic_impactors_0_134999_color.csv --orbits ./cleaned_synthetic_impactors_0_134999_orbit.csv --pointing-db sorcha_prerocess/argus_observations_10yr.db -o ./ -t impactor_run_0_134999_10yr_argus_w_rand_fading --ew impactor_run_0_134999_10yr_argus_w_rand_fading_complete
Submitted batch job 37584001
It failed because it ran out of memory. Let me parallelize with multi_sorcha.sh.
09-29
Argus
(base) qc59@dcc-login-03 /work/qc59 $ sbatch sorcha_run.sh -c Argus_circular_approximation.ini -p ./r_150_color.csv --orbits ./r_150_orbit.csv --pointing-db sorcha_prerocess/argus_observations_10yr.db -o ./ -t impactor_run_r_150_argus_no_anything_rand --ew impactor_run_r_150_argus_no_anything_rand_complete
Submitted batch job 37428677
(base) qc59@dcc-login-03 /work/qc59 $ squeue -u qc59
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
37428677 cosmology sorcha qc59 R 0:29 1 dcc-cosmology-15
To test what is preventing them from being seen, from bottom to top:
linking - bright limit - fading function - FOV - randomizing astrometry and photometry (SNR cut - vignetting depth correction - randomization)
Nothing off; the previous log:
vim impactor_run_no_detection_150-2025-09-25-10-26-32-p3158294-sorcha.log
everything off
sbatch sorcha_run.sh -c sorcha_config_demo.ini -p ./r_150_color.csv --orbits ./r_150_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_r_150_no_anything_rand --ew impactor_run_r_150_no_anything_rand_complete
Submitted batch job 37428616
impactor_run_r_150_no_anything_rand.h5
Only with randomization turned on:
sbatch sorcha_run.sh -c sorcha_config_demo.ini -p ./r_150_color.csv --orbits ./r_150_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_r_150_w_rand --ew impactor_run_r_150_w_rand_complete
Submitted batch job 37431146
vim impactor_run_r_150_w_rand-2025-09-29-14-05-57-p2389426-sorcha.log
impactor_run_r_150_w_rand_complete.csv
impactor_run_r_150_w_rand.h5
2025-09-29 14:11:03,420 sorcha.sorcha INFO Ephemeris generation completed
2025-09-29 14:11:03,420 sorcha.sorcha INFO Start post processing for this chunk
2025-09-29 14:11:03,420 sorcha.sorcha INFO Matching pointing database information to observations on rough camera footprint
2025-09-29 14:11:03,449 sorcha.sorcha INFO Calculating apparent magnitudes...
2025-09-29 14:11:03,449 sorcha.modules.PPCalculateApparentMagnitude INFO Selecting and applying correct colour offset...
2025-09-29 14:11:03,486 sorcha.modules.PPCalculateApparentMagnitude INFO Calculating apparent magnitude in filter...
2025-09-29 14:11:03,492 sorcha.sorcha INFO Calculating trailing losses...
2025-09-29 14:11:03,494 sorcha.sorcha INFO Vignetting turned OFF in config file. 5-sigma depth of field will be used for subsequent calculations.
2025-09-29 14:11:03,494 sorcha.sorcha INFO Calculating astrometric and photometric uncertainties...
2025-09-29 14:11:03,499 sorcha.sorcha INFO Number of rows BEFORE randomizing astrometry and photometry: 30277
2025-09-29 14:11:03,499 sorcha.modules.PPRandomizeMeasurements INFO Removing all observations with SNR < 2.0...
2025-09-29 14:11:03,511 sorcha.modules.PPRandomizeMeasurements INFO Randomising photometry...
2025-09-29 14:11:03,512 sorcha.utilities.sorchaArguments INFO the rng seed for the sorcha.modules.PPRandomizeMeasurements module is 1125435990
2025-09-29 14:11:03,513 sorcha.modules.PPRandomizeMeasurements INFO Randomizing astrometry...
2025-09-29 14:11:03,517 sorcha.sorcha INFO Number of rows AFTER randomizing astrometry and photometry: 17826
2025-09-29 14:11:03,518 sorcha.sorcha INFO Applying field-of-view filters...
2025-09-29 14:11:03,518 sorcha.sorcha INFO Number of rows BEFORE applying FOV filters: 17826
2025-09-29 14:11:03,518 sorcha.modules.PPApplyFOVFilter INFO Applying sensor footprint filter...
2025-09-29 14:11:03,557 sorcha.sorcha INFO Number of rows AFTER applying FOV filters: 10393
only vignetting depth correction turned on
sbatch sorcha_run.sh -c sorcha_config_demo.ini -p ./r_150_color.csv --orbits ./r_150_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_r_150_w_depcor --ew impactor_run_r_150_w_depcor_complete
Submitted batch job 37431368
impactor_run_r_150_w_depcor-2025-09-29-14-14-05-p2390060-sorcha.err
impactor_run_r_150_w_depcor-2025-09-29-14-14-05-p2390060-sorcha.log
impactor_run_r_150_w_depcor_complete.csv
impactor_run_r_150_w_depcor.h5
2025-09-29 14:19:08,356 sorcha.sorcha INFO Number of rows BEFORE applying FOV filters: 30277
2025-09-29 14:19:08,356 sorcha.modules.PPApplyFOVFilter INFO Applying sensor footprint filter...
2025-09-29 14:19:08,402 sorcha.sorcha INFO Number of rows AFTER applying FOV filters: 17467
2025-09-29 14:05:57,347 sorcha.utilities.sorchaArguments INFO the base rng seed is 2204404253
2025-09-29 14:11:03,512 sorcha.utilities.sorchaArguments INFO the rng seed for the sorcha.modules.PPRandomizeMeasurements module is 1125435990
only fading function turned on
sbatch sorcha_run.sh -c sorcha_config_demo.ini -p ./r_150_color.csv --orbits ./r_150_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_r_150_w_fading --ew impactor_run_r_150_w_fading_complete
Submitted batch job 37432980
impactor_run_r_150_w_fading-2025-09-29-14-32-43-p2391475-sorcha.err
impactor_run_r_150_w_fading-2025-09-29-14-32-43-p2391475-sorcha.log
impactor_run_r_150_w_fading_complete.csv
impactor_run_r_150_w_fading.h5
2025-09-29 14:32:43,751 sorcha.utilities.sorchaArguments INFO the base rng seed is 4049589991
2025-09-29 14:37:45,875 sorcha.sorcha INFO Number of rows BEFORE applying FOV filters: 30277
2025-09-29 14:37:45,875 sorcha.modules.PPApplyFOVFilter INFO Applying sensor footprint filter...
2025-09-29 14:37:45,920 sorcha.sorcha INFO Number of rows AFTER applying FOV filters: 17467
2025-09-29 14:37:45,920 sorcha.sorcha INFO Applying detection efficiency fading function...
2025-09-29 14:37:45,920 sorcha.sorcha INFO Number of rows BEFORE applying fading function: 17467
2025-09-29 14:37:45,920 sorcha.modules.PPFadingFunctionFilter INFO Calculating probabilities of detections...
2025-09-29 14:37:45,921 sorcha.modules.PPFadingFunctionFilter INFO Dropping observations below detection threshold...
2025-09-29 14:37:45,921 sorcha.utilities.sorchaArguments INFO the rng seed for the sorcha.modules.PPDropObservations module is 886358178
2025-09-29 14:37:45,923 sorcha.sorcha INFO Number of rows AFTER applying fading function: 7755
only with linking turned on
sbatch sorcha_run.sh -c sorcha_config_demo.ini -p ./r_150_color.csv --orbits ./r_150_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_r_150_w_linking --ew impactor_run_r_150_w_linking_complete
Submitted batch job 37434245
impactor_run_r_150_w_linking-2025-09-29-14-45-22-p2392470-sorcha.err
impactor_run_r_150_w_linking-2025-09-29-14-45-22-p2392470-sorcha.log
impactor_run_r_150_w_linking_complete.csv
impactor_run_r_150_w_linking.h5
2025-09-29 14:50:26,252 sorcha.sorcha INFO Applying field-of-view filters...
2025-09-29 14:50:26,252 sorcha.sorcha INFO Number of rows BEFORE applying FOV filters: 30277
2025-09-29 14:50:26,252 sorcha.modules.PPApplyFOVFilter INFO Applying sensor footprint filter...
2025-09-29 14:50:26,309 sorcha.sorcha INFO Number of rows AFTER applying FOV filters: 17467
2025-09-29 14:50:26,309 sorcha.sorcha INFO Applying SSP linking filter...
2025-09-29 14:50:26,309 sorcha.sorcha INFO Number of rows BEFORE applying SSP linking filter: 17467
2025-09-29 14:50:26,624 sorcha.sorcha INFO Number of rows AFTER applying SSP linking filter: 17347
only with brightness limit turned on
sbatch sorcha_run.sh -c sorcha_config_demo.ini -p ./r_150_color.csv --orbits ./r_150_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_r_150_w_bright --ew impactor_run_r_150_w_bright_complete
Submitted batch job 37435413
with everything turned on
(base) qc59@dcc-login-03 **/work/qc59 $** sbatch sorcha_run.sh -c sorcha_config_demo.ini -p ./r_150_color.csv --orbits ./r_150_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_r_150_w_everything --ew impactor_run_r_150_w_everything_complete
Submitted batch job 37435280
impactor_run_r_150_w_everything-2025-09-29-15-14-52-p2394345-sorcha.err
impactor_run_r_150_w_everything-2025-09-29-15-14-52-p2394345-sorcha.log
impactor_run_r_150_w_everything_complete.csv
impactor_run_r_150_w_everything.h5
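To see at a glance how much each filter removes, a quick count over the per-filter outputs (a sketch; assumes each .h5 is a single pandas-readable HDF5 table with an ObjID column, with filenames following the run tags above):

```python
import pandas as pd

runs = {
    "no_anything": "impactor_run_r_150_no_anything_rand.h5",
    "rand":        "impactor_run_r_150_w_rand.h5",
    "depcor":      "impactor_run_r_150_w_depcor.h5",
    "fading":      "impactor_run_r_150_w_fading.h5",
    "linking":     "impactor_run_r_150_w_linking.h5",
    "bright":      "impactor_run_r_150_w_bright.h5",
    "everything":  "impactor_run_r_150_w_everything.h5",
}

for label, path in runs.items():
    df = pd.read_hdf(path)  # assumes one table per file
    print(f"{label:12s} rows={len(df):7d} unique objects={df['ObjID'].nunique():6d}")
```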
Distribution plot

with fading, linking, and bright
sbatch sorcha_run.sh -c sorcha_config_demo.ini -p ./r_150_color.csv --orbits ./r_150_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_r_150_w_linking_fading_bright --ew impactor_run_r_150_w_linking_fading_bright_complete
Submitted batch job 37436364
impactor_run_r_150_w_linking_fading_bright-2025-09-29-16-04-27-p2397675-sorcha.err
impactor_run_r_150_w_linking_fading_bright-2025-09-29-16-04-27-p2397675-sorcha.log
impactor_run_r_150_w_linking_fading_bright_complete.csv
impactor_run_r_150_w_linking_fading_bright.h5
with linking, randomizing, and bright
sbatch sorcha_run.sh -c sorcha_config_demo.ini -p ./r_150_color.csv --orbits ./r_150_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_r_150_w_linking_rand_bright --ew impactor_run_r_150_w_linking_rand_bright_complete
Submitted batch job 37436578
09-26
With no fading function
(base) qc59@dcc-login-04 /work/qc59 $ sbatch sorcha_run.sh -c sorcha_config_demo.ini -p ./in_detection_150_color.csv --orbits ./in_detection_150_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_in_detection_150_no_fading --ew impactor_run_in_detection_150_no_fading_complete -f
Submitted batch job 37333161
(base) qc59@dcc-login-04 /work/qc59 $ sbatch sorcha_run.sh -c sorcha_config_demo.ini -p ./no_detection_150_color.csv --orbits ./no_detection_150_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_no_detection_150_no_fading --ew impactor_run_no_detection_150_no_fading_complete -f
Submitted batch job 37333186
still not returning all of them
Now try turning off any other filtering
(base) qc59@dcc-login-04 /work/qc59 $ vim sorcha_config_demo.ini
(base) qc59@dcc-login-04 /work/qc59 $ sbatch sorcha_run.sh -c sorcha_config_demo.ini -p ./no_detection_150_color.csv --orbits ./no_detection_150_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_no_detection_150_no_anything --ew impactor_run_no_detection_150_no_anything_complete -f
Submitted batch job 37337323
(base) qc59@dcc-login-04 /work/qc59 $ sbatch sorcha_run.sh -c sorcha_config_demo.ini -p ./in_detection_150_color.csv --orbits ./in_detection_150_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_in_detection_150_no_anything --ew impactor_run_in_detection_150_no_anything_complete -f
Submitted batch job 37337358
5 sigma depth
FOV filtering
09-25
The full data returned by sorcha is before randomizing astrometry and photometry

Some randomness in the detection


09-24
Submitted a job for the >150 m objects that have no detections:
(base) qc59@dcc-login-04 /work/qc59 $ sbatch sorcha_run.sh -c sorcha_config_demo.ini -p ./no_detection_150_color.csv --orbits ./no_detection_150_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_no_detection_150 --ew impactor_run_no_detection_150
Submitted batch job 37239745
And the in-detection one:
(base) qc59@dcc-login-04 /work/qc59 $ sbatch sorcha_run.sh -c sorcha_config_demo.ini -p ./in_detection_150_color.csv --orbits ./in_detection_150_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_in_detection_150 --ew impactor_run_in_detection_150_complete
Submitted batch job 37240020
Todo: compare the results, check their logs to see if they abandoned some unobserved ones.
impact distance plot is working
impactor_distance.ipynb

09-24
Look into the big objects that went undetected

This one is detected during the DDF survey

lsst-patches.ipynb
make_schedule_timeline()

09-22
neo_adjusted_epochs_134000_134999.h5: 145/145 rows used
TOTAL rows seen: 17517
09-21
92000-95000 seems to have been done by previous loops. Should double-check.
submitted another 40,000:
(base) qc59@dcc-login-04 /work/qc59 $ ./synthetic_impactors_multi_submit.sh 95000 135000 20000 1000
Submitting parallel NEO analysis jobs:
Global range: 95000 to 135000 (exclusive)
Chunk size: 20000 (per job's assigned range)
Window size: 1000 (save every N NEOs within each job)
Submitted job 36899290 for range 95000-114999 (window=1000)
Submitted job 36899291 for range 115000-134999 (window=1000)
Submitted 2 jobs total.
Monitor: squeue -u qc59
Logs: logs/
Orbit and color files have been created for 0-94999:
(sorcha) qc59@dcc-login-04 /work/qc59 $ sbatch preprocess_orbit_color.sh
Submitted batch job 36901288
- Synthetic_Impactors_combined_0_94999_orbit.csv: 12238 rows
No duplicate ObjIDs across files detected (by filename provenance).
Summary:
neo_adjusted_epochs_0_999.h5: 130/130 rows used
neo_adjusted_epochs_1000_1999.h5: 126/126 rows used
neo_adjusted_epochs_2000_2999.h5: 140/140 rows used
neo_adjusted_epochs_3000_3999.h5: 122/122 rows used
neo_adjusted_epochs_4000_4999.h5: 115/115 rows used
neo_adjusted_epochs_5000_5999.h5: 139/139 rows used
neo_adjusted_epochs_6000_6999.h5: 139/139 rows used
neo_adjusted_epochs_7000_7999.h5: 146/146 rows used
neo_adjusted_epochs_8000_8999.h5: 126/126 rows used
neo_adjusted_epochs_9000_9999.h5: 129/129 rows used
neo_adjusted_epochs_10000_10999.h5: 122/122 rows used
neo_adjusted_epochs_11000_11999.h5: 149/149 rows used
neo_adjusted_epochs_12000_21999.h5: 1322/1322 rows used
neo_adjusted_epochs_22000_31999.h5: 1276/1276 rows used
neo_adjusted_epochs_32000_41999.h5: 1279/1279 rows used
neo_adjusted_epochs_42000_51999.h5: 1305/1305 rows used
neo_adjusted_epochs_52000_52999.h5: 114/114 rows used
neo_adjusted_epochs_53000_53999.h5: 127/127 rows used
neo_adjusted_epochs_54000_54999.h5: 141/141 rows used
neo_adjusted_epochs_55000_55999.h5: 136/136 rows used
neo_adjusted_epochs_56000_56999.h5: 125/125 rows used
neo_adjusted_epochs_57000_57999.h5: 122/122 rows used
neo_adjusted_epochs_58000_58999.h5: 121/121 rows used
neo_adjusted_epochs_59000_59999.h5: 128/128 rows used
neo_adjusted_epochs_60000_60999.h5: 116/116 rows used
neo_adjusted_epochs_61000_61999.h5: 117/117 rows used
neo_adjusted_epochs_62000_62999.h5: 111/111 rows used
neo_adjusted_epochs_63000_63999.h5: 137/137 rows used
neo_adjusted_epochs_64000_64999.h5: 141/141 rows used
neo_adjusted_epochs_65000_65999.h5: 133/133 rows used
neo_adjusted_epochs_66000_66999.h5: 118/118 rows used
neo_adjusted_epochs_67000_67999.h5: 136/136 rows used
neo_adjusted_epochs_68000_68999.h5: 137/137 rows used
neo_adjusted_epochs_69000_69999.h5: 130/130 rows used
neo_adjusted_epochs_70000_70999.h5: 119/119 rows used
neo_adjusted_epochs_71000_71999.h5: 126/126 rows used
neo_adjusted_epochs_72000_72999.h5: 135/135 rows used
neo_adjusted_epochs_73000_73999.h5: 123/123 rows used
neo_adjusted_epochs_74000_74999.h5: 134/134 rows used
neo_adjusted_epochs_75000_75999.h5: 133/133 rows used
neo_adjusted_epochs_76000_76999.h5: 134/134 rows used
neo_adjusted_epochs_77000_77999.h5: 111/111 rows used
neo_adjusted_epochs_78000_78999.h5: 124/124 rows used
neo_adjusted_epochs_79000_79999.h5: 125/125 rows used
neo_adjusted_epochs_80000_80999.h5: 126/126 rows used
neo_adjusted_epochs_81000_81999.h5: 126/126 rows used
neo_adjusted_epochs_82000_82999.h5: 130/130 rows used
neo_adjusted_epochs_83000_83999.h5: 126/126 rows used
neo_adjusted_epochs_84000_84999.h5: 148/148 rows used
neo_adjusted_epochs_85000_85999.h5: 131/131 rows used
neo_adjusted_epochs_86000_86999.h5: 119/119 rows used
neo_adjusted_epochs_87000_87999.h5: 134/134 rows used
neo_adjusted_epochs_88000_88999.h5: 113/113 rows used
neo_adjusted_epochs_89000_89999.h5: 125/125 rows used
neo_adjusted_epochs_90000_90999.h5: 120/120 rows used
neo_adjusted_epochs_91000_91999.h5: 136/136 rows used
neo_adjusted_epochs_92000_92999.h5: 145/145 rows used
neo_adjusted_epochs_93000_93999.h5: 125/125 rows used
neo_adjusted_epochs_94000_94999.h5: 115/115 rows used
TOTAL rows seen: 12238
Note: final combined counts reported above.
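The combine step presumably reduces to globbing the per-chunk .h5 files, concatenating, and checking ObjID uniqueness; a sketch (one table per file assumed):

```python
import glob
import pandas as pd

files = sorted(glob.glob("neo_adjusted_epochs_*.h5"))
combined = pd.concat((pd.read_hdf(f) for f in files), ignore_index=True)

# Corresponds to the "No duplicate ObjIDs across files" check above.
dupes = combined["ObjID"].duplicated().sum()
print(f"TOTAL rows seen: {len(combined)}; duplicate ObjIDs: {dupes}")
```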
And 0-94999 was sent to the Sorcha pipeline:
sbatch sorcha_run.sh -c sorcha_config_demo.ini -p synthetic_impactors/color_orbit_output/Synthetic_Impactors_combined_0_94999_color.csv --orbits synthetic_impactors/color_orbit_output/Synthetic_Impactors_combined_0_94999_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_0_94999_10yr_full_output_test --ew impactor_run_0_94999_10yr_full_output_test_complete
Submitted batch job 36901620
09-20
(base) qc59@dcc-login-01 /work/qc59 $ ./synthetic_impactors_multi_submit.sh 52000 92000 20000 1000
Submitting parallel NEO analysis jobs:
Global range: 52000 to 92000 (exclusive)
Chunk size: 20000 (per job's assigned range)
Window size: 1000 (save every N NEOs within each job)
Submitted job 36840178 for range 52000-71999 (window=1000)
Submitted job 36840179 for range 72000-91999 (window=1000)
Submitted 2 jobs total.
Monitor: squeue -u qc59
Logs: logs/
09-18
Now:
impactor_run_0_7999_10yr_full_output_test.h5
Running on dcc, for complete output with the --ew flag:
sbatch sorcha_run.sh -c sorcha_config_demo.ini -p Synthetic_Impactors_combined_0_7999_color.csv --orbits Synthetic_Impactors_combined_0_7999_orbit.csv --pointing-db baseline_v3.4_10yrs.db -o ./ -t impactor_run_0_7999_10yr_full_output_test2 --ew impactor_run_0_7999_10yr_full_output_test2
Submitted batch job 36589430
Deriving diameter
09-17
With only 0-999 (before combined data file):
cat logs/sorcha-36580443.err
With the combined data file, try to see if the problem comes from the combined file itself.
Some of the impactors I created are dangerous cases: they impact while very close to the Sun.

And there are some post-detection ones - only detectable after the impact?
Trying to
09-16
Add new code to retry every few seconds.
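A sketch of that retry logic (the query function and exception handling are placeholders; the real failure mode was JPL rejecting requests that came too frequently):

```python
import time

def query_with_retry(query_fn, *args, retries=10, wait_s=5, **kwargs):
    """Call a flaky remote query (e.g. a JPL Horizons lookup), retrying every few seconds."""
    for attempt in range(1, retries + 1):
        try:
            return query_fn(*args, **kwargs)
        except Exception as exc:  # in practice, catch the specific HTTP/connection error
            print(f"attempt {attempt}/{retries} failed: {exc}; retrying in {wait_s} s")
            time.sleep(wait_s)
    raise RuntimeError(f"query failed after {retries} attempts")
```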
Check the status of these: vim logs/neo_analysis_36497725_4294967294.err
36497719 cosmology neo_anal qc59 R 3:38 1 dcc-cosmology-15
36497720 cosmology neo_anal qc59 R 3:38 1 dcc-cosmology-15
36497721 cosmology neo_anal qc59 R 3:38 1 dcc-cosmology-15
36497722 cosmology neo_anal qc59 R 3:38 1 dcc-cosmology-15
36497723 cosmology neo_anal qc59 R 3:38 1 dcc-cosmology-14
36497724 cosmology neo_anal qc59 R 3:38 1 dcc-cosmology-14
36497725 cosmology neo_anal qc59 R 3:38 1 dcc-cosmology-14
Submitted 100k objects lol - not working because JPL refused the requests (too frequent)
Submitting parallel NEO analysis jobs:
Range: 2000 to 100000
Chunk size: 1000
Submitted job 36492776 for range 2000-2999
Submitted job 36492777 for range 3000-3999
Submitted job 36492778 for range 4000-4999
Submitted job 36492779 for range 5000-5999
Submitted job 36492780 for range 6000-6999
Submitted job 36492781 for range 7000-7999
Submitted job 36492782 for range 8000-8999
Submitted job 36492783 for range 9000-9999
Submitted job 36492784 for range 10000-10999
Submitted job 36492785 for range 11000-11999
Submitted job 36492786 for range 12000-12999
Submitted job 36492787 for range 13000-13999
Submitted job 36492788 for range 14000-14999
Submitted job 36492789 for range 15000-15999
Submitted job 36492790 for range 16000-16999
Submitted job 36492791 for range 17000-17999
Submitted job 36492792 for range 18000-18999
Submitted job 36492793 for range 19000-19999
Submitted job 36492794 for range 20000-20999
Submitted job 36492795 for range 21000-21999
Submitted job 36492796 for range 22000-22999
Submitted job 36492797 for range 23000-23999
Submitted job 36492798 for range 24000-24999
Submitted job 36492799 for range 25000-25999
Submitted job 36492800 for range 26000-26999
Submitted job 36492801 for range 27000-27999
Submitted job 36492802 for range 28000-28999
Submitted job 36492803 for range 29000-29999
Submitted job 36492804 for range 30000-30999
Submitted job 36492805 for range 31000-31999
Submitted job 36492806 for range 32000-32999
Submitted job 36492807 for range 33000-33999
Submitted job 36492808 for range 34000-34999
Submitted job 36492809 for range 35000-35999
Submitted job 36492810 for range 36000-36999
Submitted job 36492811 for range 37000-37999
Submitted job 36492812 for range 38000-38999
Submitted job 36492813 for range 39000-39999
Submitted job 36492814 for range 40000-40999
Submitted job 36492815 for range 41000-41999
Submitted job 36492816 for range 42000-42999
Submitted job 36492817 for range 43000-43999
Submitted job 36492818 for range 44000-44999
Submitted job 36492819 for range 45000-45999
Submitted job 36492820 for range 46000-46999
Submitted job 36492821 for range 47000-47999
Submitted job 36492822 for range 48000-48999
Submitted job 36492823 for range 49000-49999
Submitted job 36492824 for range 50000-50999
Submitted job 36492825 for range 51000-51999
Submitted job 36492826 for range 52000-52999
Submitted job 36492827 for range 53000-53999
Submitted job 36492828 for range 54000-54999
Submitted job 36492829 for range 55000-55999
Submitted job 36492830 for range 56000-56999
Submitted job 36492831 for range 57000-57999
Submitted job 36492832 for range 58000-58999
Submitted job 36492833 for range 59000-59999
Submitted job 36492834 for range 60000-60999
Submitted job 36492835 for range 61000-61999
Submitted job 36492836 for range 62000-62999
Submitted job 36492837 for range 63000-63999
Submitted job 36492838 for range 64000-64999
Submitted job 36492839 for range 65000-65999
Submitted job 36492840 for range 66000-66999
Submitted job 36492841 for range 67000-67999
Submitted job 36492842 for range 68000-68999
Submitted job 36492843 for range 69000-69999
Submitted job 36492844 for range 70000-70999
Submitted job 36492845 for range 71000-71999
Submitted job 36492846 for range 72000-72999
Submitted job 36492847 for range 73000-73999
Submitted job 36492848 for range 74000-74999
Submitted job 36492849 for range 75000-75999
Submitted job 36492850 for range 76000-76999
Submitted job 36492851 for range 77000-77999
Submitted job 36492852 for range 78000-78999
Submitted job 36492853 for range 79000-79999
Submitted job 36492854 for range 80000-80999
Submitted job 36492855 for range 81000-81999
Submitted job 36492856 for range 82000-82999
Submitted job 36492857 for range 83000-83999
Submitted job 36492858 for range 84000-84999
Submitted job 36492859 for range 85000-85999
Submitted job 36492860 for range 86000-86999
Submitted job 36492861 for range 87000-87999
Submitted job 36492862 for range 88000-88999
Submitted job 36492863 for range 89000-89999
Submitted job 36492864 for range 90000-90999
Submitted job 36492865 for range 91000-91999
Submitted job 36492866 for range 92000-92999
Submitted job 36492867 for range 93000-93999
Submitted job 36492868 for range 94000-94999
Submitted job 36492869 for range 95000-95999
Submitted job 36492870 for range 96000-96999
Submitted job 36492871 for range 97000-97999
Submitted job 36492872 for range 98000-98999
Submitted job 36492873 for range 99000-99999
Submitted 98 jobs total
Monitor jobs with: squeue -u qc59
Check logs in: logs/
Parallel code for creating impactors across the whole NEO population
30 min for 2000 obj
synthetic_impactors_multi_submit.sh: the .sh file that splits the jobs and runs sbatch synthetic_impactors_multi_work.sh
synthetic_impactors_multi_work.sh: the sh file that sbatch works on.
To run for multiple jobs: (start, end, chunk size) - see the chunking sketch after the example output below
./synthetic_impactors_multi_submit.sh 0 2000 1000
It will print:
Submitting parallel NEO analysis jobs:
Range: 0 to 2000
Chunk size: 1000
Submitted job 36485280 for range 0-999
Submitted job 36485281 for range 1000-1999
Submitted 2 jobs total
Monitor jobs with: squeue -u qc59
Check logs in: logs/
And the log file has the name: vim logs/neo_analysis_36485280_4294967294.log
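The chunking logic amounts to splitting [start, end) into chunk-sized ranges and submitting one worker per range; in Python terms (a sketch - the real script is bash, and the worker's argument list is an assumption):

```python
import subprocess

def submit_jobs(start, end, chunk):
    """Split [start, end) into chunk-sized ranges and sbatch one worker per range."""
    for lo in range(start, end, chunk):
        hi = min(lo + chunk, end) - 1  # inclusive upper bound, matching the printed ranges
        print(f"Submitting range {lo}-{hi}")
        subprocess.run(["sbatch", "synthetic_impactors_multi_work.sh", str(lo), str(hi)],
                       check=True)

submit_jobs(0, 2000, 1000)  # -> ranges 0-999 and 1000-1999, as in the output above
```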
09-15
impacting
related files:

09-14
It's happening!
Found some errors and fixed them:
- Somewhere I mistakenly put omega (argument of Pericenter) as Omega (node);
- wrapped ecliptic longitude the wrong way, so I was finding Earth at opposition rather than at the same place;
- and was finding Earth at the crossing position closest to the survey window rather than at my t_node guess time
And did some improvements:
- Adding the constraints of survey start/end window, so I’m making sure they collide within expected time window.
- I also restructured my code in a set of .py files so it’s very convenient to run through all potential objects
Now my synthetic objects are all very good (no 0.6 AU issue, and all of them are around 0.01-0.03 AU away when approaching Earth)
Submitting to dcc to run through LSST observations and Argus:
first one on node 15 is for argus
second one on node 04 is for lsst (full 10 year)
36291407 cosmology sorcha qc59 R 0:05 1 dcc-cosmology-15
36291022 cosmology sorcha qc59 R 5:44 1 dcc-cosmology-04
09-12
JPL time system:
- UT is Universal Time. This can mean one of two non-uniform time-scales based on the rotation of the Earth. For this program, prior to 1962, UT means UT1. After 1962, UT means UTC or “Coordinated Universal Time”. Future UTC leap-seconds are not known yet, so the closest known leap-second correction is used over future time-spans.
Horizons System
09-10
JPL:
needs MA (mean anomaly) or Tp (time of perihelion passage)

How close does an asteroid have to get before it turns into something hitting Earth?
Gravitational Sphere of Influence
Rebound
How often do collisions happen? Does the simulation take this into account?
09-05
sbatch --array=0-3 multi_sorcha.sh 100 32
Revised my .sh file, so that the log file/output file won’t overlap
Job ID 35386350: 9000 objects from the >9 yr detections, running in parallel via sbatch --array=0-3 multi_sorcha.sh 100 32
--chunksize $(($1 * $2)) \
--norbits $1 \
--cores $2 \
2 hours for 12,000 objects
500 chunks × 2 hours = 1000 hours for all 6 million
1000 hours / 24 ≈ 42 days
09-04
sbatch --array=0-3 multi_sorcha.sh 100 32
If memory is set to 160 GB, some jobs fail.
Now assigning 260 GB memory and trying again.
6 hours for non-parallel - 1000 objects, 1 yr
submitted dcc job to run sorcha
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
35332586 cosmology sorcha qc59 R 0:05 1 dcc-cosmology-04
35332329 cosmology sorcha qc59 R 5:32 1 dcc-cosmology-04
09-02
Now it returns detections!! Perfect!!
But there will also be cases where brightness can be a limitation for Argus.
We should address this trade-off in our argument.
08-29
from Jake:
adam core for impact prediction
b612 adam core
GitHub - B612-Asteroid-Institute/adam_core: Astrodynamics utilities used by the Asteroid Institute
Asteroid Institute - adam_core
To generate our own drawing of asteroids:
https://www2.boulder.swri.edu/~davidn/NEOMOD_Simulator/
Jake's talk
Sednoids
u band is not good at all, and y is not good either.
Has this been validated?
sorcha run -c sorcha_config_demo.ini -p sspp_testset_colours.txt --orbits sspp_testset_orbits.des --pointing-db baseline_v2.0_1yr.db -o ./ -t testrun_e2e --ew output_full
08-26

square degree vs steradians.
Steradians, Radians Squared, Degrees Squares as Solid Angles - YouTube
Trying to derive brightness
- r - band for now, or I need color (i-x, r-x, etc)

08-21
Many visualizations.

08-16
Can I also include discovery RA and Dec in my analysis?


08-01
Sorcha simulation:
sorcha run -c sorcha_config_demo.ini -p sspp_testset_colours.txt --orbits sspp_testset_orbits.des --pointing-db baseline_v2.0_1yr.db -o ./ -t testrun_e2e --stats testrun_stats
sorcha run -c Rubin_circular_approximation.ini -p 2024YR4_colours.txt --orbits 2024YR4_orbits.des --pointing-db ../baseline_v2.0_1yr.db -o ./ -t testrun_e2e --stats testrun_stats
Missing color information
07-31

07-30
filter out ddf objects

Deep Drilling Fields — Observing Strategy
07-29
Converting between zenith/azimuth and RA/Dec

DCR effect:
toward Zenith: plus offset value
Bluer star: smaller g value

07-21
The Trojan model does not have albedo information, so diameter information can't be derived.
- neomod/ - NEO population model inputs
- s3m/ - Main Belt Asteroid model inputs
- cfeps/ - TNO model inputs (CFEPS-L7)
- trojanmod/ - Jupiter Trojan model inputs
- hildamod/ - Hilda asteroid model inputs
07-15
"
With the object so close to Earth, the parallax of different observers on different parts of the globe allowed much greater precision than is usual
"
Fireballs
| Asteroid | Size (m) | Time Before Impact | Detection Method | Notes |
|---|---|---|---|---|
| 2008 TC3 | ~4 m | ~19 hours | Optical telescope (Catalina Sky Survey) | First ever detected before impact |
| 2014 AA | ~2-3 m | ~21 hours | Catalina Sky Survey | Impacted over Atlantic; no meteorite recovered |
| 2018 LA | ~3 m | ~8 hours | Catalina Sky Survey | Exploded over Botswana; meteorites recovered |
| 2022 EB5 | ~2 m | ~2 hours | Piszkéstető Observatory (Hungary) | Impacted over the Arctic Ocean |
| 2023 CX1 | ~1 m | ~7 hours | Observatoire de Paris (by amateur observer) | Tracked in real time to fireball over France |
detection phases
| Detection Phase | Method | Relevant Paper |
|---|---|---|
| Pre-impact (space) | Sky surveys + MOPS + Scout | Jenniskens 2009, Farnocchia 2016 |
| Atmospheric entry | All-sky cameras, satellites, infrasound | Brown 2002, Devillepoix 2020, Colas 2020 |
| Orbit + recovery | Tracklets + triangulation + modeling | Jenniskens, FRIPON, DFN papers (more on atmosphere entry phase) |
Pre-impact phase:
| Year | Leading Survey |
|---|---|
| Early 2000s | LINEAR |
| 2010–2014 | Catalina Sky Survey |
| 2015–Now | Pan-STARRS (now leads discoveries) |
| Special Role | NEOWISE detects many dark NEAs |
current surveys
| Survey Name | Location | Operator | Key Features |
|---|---|---|---|
| Catalina Sky Survey (CSS) | Arizona, USA | Univ. of Arizona / NASA | Most successful in discovering NEAs, discovered 2008 TC3, 2018 LA |
| Pan-STARRS (1 & 2) | Hawaii, USA | Univ. of Hawaii / NASA | Deep, wide-field imaging; largest number of NEA discoveries since ~2015 |
| ATLAS (Asteroid Terrestrial-impact Last Alert System) | Hawaii, Chile, South Africa | Univ. of Hawaii / NASA | Full-sky every night; designed for days-to-hours impact alerts |
| ZTF (Zwicky Transient Facility) | California, USA | Caltech | Fast, wide-field transient survey; not NEA-focused but still contributes |
| LINEAR (Lincoln Near-Earth Asteroid Research) | New Mexico, USA | MIT Lincoln Lab / USAF | Dominant NEA discoverer in early 2000s (now retired) |
| Spacewatch | Arizona, USA | Univ. of Arizona | Pioneering NEA survey in the 1990s; still active |
| NEOWISE (space-based) | Earth orbit | NASA JPL | Infrared detection; identifies dark asteroids invisible to visible-light telescopes |
space based or planned
| Mission | Status | Key Role |
|---|---|---|
| NEOWISE | Active (reactivated WISE mission) | Infrared detection of dark NEAs |
| NEOCam / NEO Surveyor | Launch ~2027 (planned) | Will detect NEAs from infrared space observatory — better for spotting sunward NEAs |
| Sentinel (B612 Foundation) | Canceled | Proposed space-based NEO detector |
| Hera (ESA) | 2027 (planetary defense mission) | Will visit Dimorphos post-DART mission for impact study, not a survey |
07-11
Different phase function:
| Model | Phase Function Φ(α) | Notes |
|---|---|---|
| Lambert | [sin α + (π − α) cos α] / π | Isotropic scattering |
| Lommel–Seeliger | Requires integrating μ₀ / (μ₀ + μ) | Better for dark bodies |
| H–G | (1 − G) Φ₁(α) + G Φ₂(α) | Empirical model |
| Hapke | Full integral over bidirectional reflectance | Physical, complex |
| Linear | Φ ≈ 10^(−0.4 β α) | Simple approximation |
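For reference, the two closed-form rows as code (a sketch; the β value in mag/deg is illustrative, and α units are handled per row as noted):

```python
import numpy as np

def phi_lambert(alpha_rad):
    """Lambert sphere phase function: [sin(a) + (pi - a) cos(a)] / pi."""
    return (np.sin(alpha_rad) + (np.pi - alpha_rad) * np.cos(alpha_rad)) / np.pi

def phi_linear(alpha_deg, beta=0.03):
    """Linear approximation 10^(-0.4 * beta * alpha); beta ~0.03 mag/deg is illustrative."""
    return 10 ** (-0.4 * beta * alpha_deg)

print(phi_lambert(np.radians(30.0)), phi_linear(30.0))
```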

07-10
The distribution of magnitude change over the course of time
How are fireball asteroids detected?
Does Sorcha include the detection limit of LSST, or does it just simulate all asteroids?
07-07


07-06
Study overview of asteroids
Trojan asteroids, main belt.
Lagrangian points

06-24
very interesting asteroid discovery rate video
Asteroid Discovery From 1980 - 2010 - YouTube
And some interesting notes by Dr Richard Miles:
Asteroids & Remote Planets – British Astronomical Association
The best time to observe is within a month or two of the opposition date listed above. This is when objects are brightest. If a target object is a slow rotator, it is necessary to extend coverage by several months at least. Asteroids pass through two retrograde points: one before opposition and one after opposition, and for a week or so they move more slowly than usual when the same reference stars can be used for a week or more. This means we can obtain more accurate photometry and solve low-amplitude rotators. Finally, another sweet spot is to observe an object when its phase angle changes very little since then we do not need to know how the correction for changing phase angle affects its brightness, i.e. quite opposite to an object within a few days either side of opposition where the phase brightening can be much greater than the rotational amplitude of its lightcurve.
A useful website hub:
Table of Contents
06-23
An answer found here:
Questions regarding MOPS asteroid linking algorithm and whether broken links are represented in DP03 - Support - Rubin Observatory LSST Community forum
Thanks for your questions, @ewhite42.
- You are correct that the HelioLinC3D software package will be used for tracklet linking and orbit fitting for Rubin. There’s a description of the linking process in the DP0.3 documentation.
- The DP0.3 data set is composed of catalogs containing real and simulated solar system and interstellar objects. You can find more information in the DP0.3 Simulation documentation, including a list of known issues with the simulated data set. No cases of broken links as you describe have been reported in DP0.3.
I now have a better sense of their pipeline
This is a good slide:
The Solar System Processing (SSP) Pipeline — Rubin Observatory DP0.3

Trying their linking package:
It's a C++ package; got some errors when installing on macOS:
"If you don't need parallel processing for testing, you can simply remove the -fopenmp flag from the Makefile" - changed it accordingly.
HelioLinC worked!!
A useful summary of how to use this package

Results from the test data.



06-22
Live asteroid tracking of 2024 YR4
Asteroid (NEO) 2024 YR4 | TheSkyLive
06-19
Detection limit:
m_stationary − Δm (due to trailing loss)
Δm (a code sketch follows the quoted passage below):
SMTN-003: Trailing Losses for Moving Objects
cneos.jpl.nasa.gov/doc/JPL_Pub_16-11_LSST_NEO.pdf
Synthetic tracking: tells about how synthetic tracking helps with the trail loss? arxiv.org/pdf/2401.03255
LSST detection strategy: faculty.washington.edu/ivezic/Publications/Jones2018.pdf
"The second effect, detection loss, occurs because
source detection software is optimized for detecting point sources;
a stellar PSF-like matched filter is used when identifying sources
that pass above the defined threshold. This filter is non-optimal
for trailed objects but losses can be mitigated with improved
software (e.g. detecting to a lower PSF-based SNR threshold and
then using a variety of trailed PSF filters to detect sources)."
When
considering whether a source would be detected at a given SNR
using typical source detection software, the sum of SNR trailing
and detection losses should be used. With an improved algorithm
optimized for trailed sources (implying additional scope for LSST
data management), the smaller SNR losses should be used instead.
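A sketch of the trailing-loss correction for the detection limit above. The functional form follows SMTN-003 (x is the trail length in units of the seeing FWHM); I'm deliberately leaving the fitted coefficients a, b as parameters to be read off the note rather than quoting them from memory, since there is one pair for the SNR (trailing) loss and another for the detection loss:

```python
import numpy as np

def trailing_loss_mag(v_deg_per_day, t_exp_s, seeing_fwhm_arcsec, a, b):
    """Delta-m for a trailed source, SMTN-003 functional form.

    x = v * t_exp / (24 * theta): trail length over seeing FWHM
    (v in deg/day, t_exp in seconds, theta in arcsec).
    a, b: fitted coefficients from SMTN-003 (different pairs for the
    SNR-trailing loss vs. the detection loss).
    """
    x = v_deg_per_day * t_exp_s / (24.0 * seeing_fwhm_arcsec)
    return 1.25 * np.log10(1.0 + a * x**2 / (1.0 + b * x))

# Moving-object limit: m_limit = m_stationary - delta_m
```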
- Does LSST now have a pipeline improved for asteroid detection?
- Fading limiting magnitude?

06-17
Definition of magnitude
Horizon:
'APmag, S-brt,' =
The asteroids' approximate apparent airless visual magnitude and surface
brightness using the standard IAU H-G system magnitude model:
APmag = H + 5*log10(delta) + 5*log10(r) - 2.5*log10((1-G)*phi_1 + G*phi_2)
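The same formula as code, using the common single-exponential approximations Φᵢ = exp(−Aᵢ tan(α/2)^Bᵢ) with A₁=3.33, B₁=0.63, A₂=1.87, B₂=1.22 (the usual H-G approximation constants, assumed here rather than taken from the Horizons docs):

```python
import numpy as np

def apmag(H, G, r_au, delta_au, alpha_deg):
    """IAU H-G apparent magnitude:
    H + 5 log10(delta) + 5 log10(r) - 2.5 log10((1-G) phi1 + G phi2)."""
    a = np.radians(alpha_deg)
    phi1 = np.exp(-3.33 * np.tan(a / 2) ** 0.63)
    phi2 = np.exp(-1.87 * np.tan(a / 2) ** 1.22)
    return (H + 5 * np.log10(delta_au) + 5 * np.log10(r_au)
            - 2.5 * np.log10((1 - G) * phi1 + G * phi2))

print(apmag(H=18.0, G=0.15, r_au=1.1, delta_au=0.2, alpha_deg=30.0))
```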
06-15
Asteroid detection algorithms
Machine Learning:
Non-sidereal tracking:
[2306.16519] Astreaks: Astrometry of NEOs with trailed background stars
06-05

06-04
The blue dots are actually coming from PanSTARRS, which means I'm pulling extended PanSTARRS data that may not be necessary.
Check the columns of the data to find useful information.
table columns:
The Pan-STARRS1 Database and Data Products - IOPscience
Use ObjectQualityFlags to filter out flag == 64 and flag == 128.

Trying to filter for good-quality PanSTARRS sources in analysze_panstarr.ipynb
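A sketch of the flag cut in pandas (ObjectQualityFlags treated as a bitmask, so test the bits with &, not equality; the input filename is hypothetical):

```python
import pandas as pd

df = pd.read_csv("panstarrs_query.csv")  # hypothetical filename

bad_bits = 64 | 128
good = df[(df["ObjectQualityFlags"].astype(int) & bad_bits) == 0]
print(f"kept {len(good)} of {len(df)} sources")
```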
New plot

Red is from argus sextractor, the dots are matched sources.

matched sources are saved under:
'matched_sources.csv'
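Given matched_sources.csv, the zero-mag correction is just the robust offset between catalog and instrumental magnitudes; a sketch with hypothetical column names ('rMeanPSFMag' from PanSTARRS, 'MAG_AUTO' from SExtractor):

```python
import pandas as pd

m = pd.read_csv("matched_sources.csv")
offsets = m["rMeanPSFMag"] - m["MAG_AUTO"]  # hypothetical column names
zp = offsets.median()  # median is robust to mismatches and variable stars
print(f"zero point = {zp:.3f} mag (scatter = {offsets.std():.3f})")
```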
06-03
Find out the limit of the argus array image:
source detection on the current image
zero-mag correction from PanSTARRS
find the magnitude limit of the Argus Array
some interesting video about argus array: The Argus Array (Hank Corbett) - YouTube
Using SExtractor:
config file
parameter file
A2TD uses: Atik Cameras Apx60
Key specifications from the search results:
- Sensor: Sony IMX 455 CMOS sensor with 61.17 MP (9576 × 6380 pixels)
- Pixel size: 3.76 μm
- Read noise: very low at 1.2 e- (much lower than typical CCDs at ~6 e-)
- Bit depth: 16-bit (65,536 gray levels)
- Cooling: up to -35°C delta
For SExtractor configuration:
- Keep DETECT_TYPE = CCD - this setting works correctly for both CCD and CMOS cameras in linear mode
- The 16-bit depth confirms saturation around 65,535 ADU
- The very low read noise (1.2 e-) means your detection thresholds can be quite sensitive
Still need to determine:
- Gain: Likely around 1.0-2.0 e-/ADU for modern CMOS sensors
- Magnitude zero point: Will need photometric calibration
Some interesting extra setup for experiment (at the end of this config file)
#-------------------------------- Catalog ------------------------------------
CATALOG_NAME output.cat # name of the output catalog
CATALOG_TYPE ASCII_HEAD # NONE,ASCII,ASCII_HEAD, ASCII_SKYCAT,
# ASCII_VOTABLE, FITS_1.0 or FITS_LDAC
PARAMETERS_NAME default.param # name of the file containing catalog contents
#------------------------------- Extraction ----------------------------------
DETECT_TYPE CCD # CCD (linear) or PHOTO (with gamma correction) - use CCD for CMOS too
DETECT_MINAREA 5 # minimum number of pixels above threshold
DETECT_THRESH 1.5 # <sigmas> or <threshold>,<ZP> in mag.arcsec-2
ANALYSIS_THRESH 1.5 # <sigmas> or <threshold>,<ZP> in mag.arcsec-2
FILTER Y # apply filter for detection (Y or N)?
FILTER_NAME default.conv # name of the file containing the filter
DEBLEND_NTHRESH 32 # Number of deblending sub-thresholds
DEBLEND_MINCONT 0.005 # Minimum contrast parameter for deblending
CLEAN Y # Clean spurious detections? (Y or N)?
CLEAN_PARAM 1.0 # Cleaning efficiency
MASK_TYPE CORRECT # type of detection MASKing: can be one of
# NONE, BLANK or CORRECT
#------------------------------ Photometry -----------------------------------
PHOT_APERTURES 5,10,20 # MAG_APER aperture diameter(s) in pixels
PHOT_AUTOPARAMS 2.5, 3.5 # MAG_AUTO parameters: <Kron_fact>,<min_radius>
PHOT_PETROPARAMS 2.0, 3.5 # MAG_PETRO parameters: <Petrosian_fact>,
# <min_radius>
PHOT_AUTOAPERS 0.0,0.0 # <estimation>,<measurement> minimum apertures
# for MAG_AUTO and MAG_PETRO
SATUR_LEVEL 65535.0 # level (in ADUs) at which arises saturation (estimated for 16-bit)
SATUR_KEY SATURATE # keyword for saturation level (in ADUs)
MAG_ZEROPOINT 0.0 # magnitude zero-point (will be calculated from data)
MAG_GAMMA 4.0 # gamma of emulsion (for photographic scans)
GAIN 1.0 # detector gain in e-/ADU (needs calibration data)
GAIN_KEY GAIN # keyword for detector gain in e-/ADU
PIXEL_SCALE 0 # size of pixel in arcsec (0=use FITS WCS info)
#------------------------- Star/Galaxy Separation ----------------------------
SEEING_FWHM 1.2 # stellar FWHM in arcsec
STARNNW_NAME default.nnw # Neural-Network_Weight table filename
#------------------------------ Background -----------------------------------
BACK_TYPE AUTO # AUTO or MANUAL
BACK_VALUE 0.0 # Default background value in MANUAL mode
BACK_SIZE 64 # Background mesh: <size> or <width>,<height>
BACK_FILTERSIZE 3 # Background filter: <size> or <width>,<height>
BACKPHOTO_TYPE GLOBAL # can be GLOBAL or LOCAL
BACKPHOTO_THICK 24 # thickness of the background LOCAL annulus
BACK_FILTTHRESH 0.0 # Threshold above which the background-
# map filter operates
#------------------------------ Check Image ----------------------------------
CHECKIMAGE_TYPE NONE # can be NONE, BACKGROUND, BACKGROUND_RMS,
# MINIBACKGROUND, MINIBACK_RMS, -BACKGROUND,
# FILTERED, OBJECTS, -OBJECTS, SEGMENTATION,
# or APERTURES
CHECKIMAGE_NAME check.fits # Filename for the check-image
#--------------------- Memory (change with caution!) -------------------------
MEMORY_OBJSTACK 3000 # number of objects in stack
MEMORY_PIXSTACK 300000 # number of pixels in stack
MEMORY_BUFSIZE 1024 # number of lines in buffer
#------------------------------- ASSOCiation ---------------------------------
ASSOC_NAME sky.list # name of the ASCII file to ASSOCiate
ASSOC_DATA 2,3,4 # columns of the data to replicate (0=all)
ASSOC_PARAMS 2,3,4 # columns of xpos,ypos[,mag]
ASSOC_RADIUS 2.0 # cross-matching radius (pixels)
ASSOC_TYPE NEAREST # ASSOCiation method: FIRST, NEAREST, MEAN,
# MAG_MEAN, SUM, MAG_SUM, MIN or MAX
ASSOCSELEC_TYPE MATCHED # ASSOC selection type: ALL, MATCHED or -MATCHED
#----------------------------- Miscellaneous ---------------------------------
VERBOSE_TYPE NORMAL # can be QUIET, NORMAL or FULL
HEADER_SUFFIX .head # Filename extension for additional headers
WRITE_XML N # Write XML file (Y/N)?
XML_NAME sex.xml # Filename for XML output
XSL_URL file:///usr/local/share/sextractor/sextractor.xsl
# Filename for XSL style-sheet
NTHREADS 1 # 1 single thread
FITS_UNSIGNED N # Treat FITS integer values as unsigned (Y/N)?
INTERP_MAXXLAG 16 # Max. lag along X for 2nd-order interpolation
INTERP_MAXYLAG 16 # Max. lag along Y for 2nd-order interpolation
INTERP_TYPE ALL # Interpolation type: NONE, VAR_ONLY or ALL
#--------------------------- Experimental Stuff -----------------------------
PSF_NAME default.psf # File containing the PSF model
PSF_NMAX 1 # Max.number of PSFs fitted simultaneously
PATTERN_TYPE RINGS-HARMONIC # can RINGS-QUADPOLE, RINGS-OCTOPOLE,
# RINGS-HARMONIC
SOM_NAME default.som # File containing Self-Organizing Map weights


06-01
Calculating the theoretical length of the streak:
The START and END times do not match the EXPTIME (UTCEND − UTCSTART ≈ 64.57 s vs EXPTIME = 60 s):
UTCSTART= '2025-05-22T08:48:50.608997'
UTCEND = '2025-05-22T08:49:55.176073'
STREAMIX= 0
RATCHNUM= '20250522_084537'
IMGTYPE = 'sci '
TARGET = '2003_HB '
TARGRA = 19.557599999999997
TARGDEC = 75.39122222222223
EXPTIME = 60
Result: 12.924517329611536 pixels.
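The calculation behind that number is just rate × exposure time / pixel scale; a sketch (the rate shown is the value implied by the numbers above, not an independently checked ephemeris rate):

```python
def streak_length_pixels(rate_arcsec_per_s, exptime_s, pixel_scale_arcsec=1.43):
    """On-sky trail length of a moving target, in pixels."""
    return rate_arcsec_per_s * exptime_s / pixel_scale_arcsec

# EXPTIME = 60 s and 1.43"/pix (measured in the 05-28 entry);
# a rate of ~0.308"/s reproduces ~12.92 px.
print(streak_length_pixels(0.308, 60.0))
```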
05-29
image tools to check the brightness of a specific position:
pan-starrs image query
PanSTARRS Image Access
sdss navigator dr17
SDSS DR16 Navigate Tool
Background subtraction and source detection

trying to see if there are some visible streaks

Adding background subtraction makes the whole process slower.
First trial of source detection:

Wrote a function of
05-28
pixel scale of the images:
{'pc_matrix_x': np.float64(1.433063198455264),
'pc_matrix_y': np.float64(1.433063198455264),
'pc_matrix_mean': np.float64(1.433063198455264)}
1.43 arcsec/pixel
calculated by
import numpy as np

# Pixel scale (arcsec/pixel) from the WCS PC matrix and CDELT keywords.
if 'PC1_1' in img_header and 'CDELT1' in img_header:
    pc1_1 = img_header['PC1_1']
    pc1_2 = img_header.get('PC1_2', 0)
    pc2_1 = img_header.get('PC2_1', 0)
    pc2_2 = img_header['PC2_2']
    cdelt1 = abs(img_header['CDELT1'])  # deg/pixel along axis 1
    cdelt2 = abs(img_header['CDELT2'])  # deg/pixel along axis 2
    # Column norms of the PC matrix scale each axis; x3600 converts deg to arcsec.
    pixel_scale_x = cdelt1 * np.sqrt(pc1_1**2 + pc2_1**2) * 3600
    pixel_scale_y = cdelt2 * np.sqrt(pc1_2**2 + pc2_2**2) * 3600
    results['pc_matrix_x'] = pixel_scale_x
    results['pc_matrix_y'] = pixel_scale_y
    results['pc_matrix_mean'] = (pixel_scale_x + pixel_scale_y) / 2
05-26
Target RA is in hours?
In the FITS header:
TARGRA = 19.557599999999997 (this appears to be in hours)
TARGDEC = 75.39122222222223 (this is in degrees)
But the WCS system expects both RA and Dec to be in degrees. The actual image center from the WCS is:
CRVAL1 = 293.28607197224 (RA in degrees)
CRVAL2 = 75.537303805773 (Dec in degrees)
Sanity check: 19.5576 h × 15°/h ≈ 293.36°, close to CRVAL1, so TARGRA is indeed in hours.
05-18
Some resources:
Projects
Software pipeline
The sky at one terabit per second: architecture and implementation of the Argus Array Hierarchical Data Processing System