Fatemeh Zamanian fazamani Microsoft Houston @Azure @microsoft

microsoft/seismic-deeplearning (125 stars)

Deep Learning for Seismic Imaging and Interpretation

microsoft/AML_Interpret_usecases (2 stars)

AML Interpret usecases demo

fazamani/2016-ml-contest (0 stars)

Machine learning contest - October 2016 TLE

fazamani/seismic-deeplearning (0 stars)

Deep Learning for Seismic Imaging and Interpretation

delete branch microsoft/seismic-deeplearning

delete branch : docker_test

delete time in 10 days

issue closed microsoft/seismic-deeplearning

Add docker tests - ensures docker image doesn't break while new features are added

Add a new job (in parallel with the setup job) to build and run the docker image. We can do this for both local and distributed training, in addition to running the notebooks and testing a pretrained model.

closed time in 10 days

yalaudah

push event microsoft/seismic-deeplearning

Fatemeh Zamanian

commit sha 9f7774c9d9eb4582acbe94750e937de4db647a93

docker build test (#388) * added a new job to test bulding the docker, for now it is daisy-chained to the end * this is just a TEST * test * test * remove old image * debug * debug * test * debug * enabled all the jobs * quick fix * removing non-tagged iamges Co-authored-by: maxkazmsft <maxkaz@microsoft.com>


push time in 10 days

push event microsoft/seismic-deeplearning

maxkazmsft

commit sha eee7dd2ee987cf1bf19d5a813165ab2d831eacfd

closes 385 (#389)


yalaudah

commit sha 21262504236ac4d9ca875c25ab419ac8d51effdb

Fixing #259 by adding symmetric padding along depth direction (#386)


Fatemeh

commit sha 641d0162a7a8e58b05ea2b3613a52f4235a609a6

Merge branch 'staging' into docker_test


push time in 11 days

push event microsoft/seismic-deeplearning

Fatemeh

commit sha 3a4f1dbb3130dcc57b357ac2c47be6bac40a31db

removing non-tagged iamges


push time in 11 days

push event microsoft/seismic-deeplearning

Fatemeh

commit sha a66a5c5e9593cede89aa8e150670830a47c6af46

quick fix


push time in 11 days

push event microsoft/seismic-deeplearning

Fatemeh

commit sha b93c744bcb00fe040e38016b8ea8a1808598e9b8

enabled all the jobs


push time in 11 days

push event microsoft/seismic-deeplearning

Fatemeh

commit sha 608bd166528f871c83d231ed43ea61a1572330cf

debug


push time in 11 days

push event microsoft/seismic-deeplearning

Fatemeh

commit sha cd0db2fe9752c01badf3adbc5e239db227d09331

test


push time in 11 days

push event microsoft/seismic-deeplearning

Fatemeh

commit sha 01897fa04e38f25c985a3f6f609e50d5cdfae9a7

debug


push time in 11 days

push event microsoft/seismic-deeplearning

Fatemeh

commit sha 59555818cd7886ae7ce3424126695468bf3ca366

debug


push time in 11 days

push event microsoft/seismic-deeplearning

Fatemeh

commit sha 12adcb540e895089d8114bb67d61d8bc9be7a11b

remove old image


push time in 11 days

push event microsoft/seismic-deeplearning

Fatemeh

commit sha 6790b2c746f17e9c002a3706f44f35d5ee6e3ba8

test


push time in 11 days

push event microsoft/seismic-deeplearning

Fatemeh

commit sha 318d24d082d02602459f7444877edf0b88d777a2

test


push time in 11 days

push event microsoft/seismic-deeplearning

Fatemeh

commit sha aa7d06c69800ee71f715d1a152d99181dce85cef

this is just a TEST


push time in 11 days

issue comment microsoft/seismic-deeplearning

Add docker tests - ensures docker image doesn't break while new features are added

Due to time constraints and hardware limitations (the build VM has a single GPU), this issue is de-scoped to the following:

  • Add a job daisy-chained to the last job in the main_build to test that the docker image builds successfully.
  • We will revisit running the jobs in parallel at a later time.

@maxkazmsft

yalaudah

comment created time in 11 days
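The daisy-chained arrangement described in the comment above could look roughly like the fragment below in an Azure Pipelines definition. This is a sketch only: the job and step names are illustrative, not the repository's actual build YAML (only the `deepseismicagentpool` pool name appears elsewhere in this log).

```yaml
# hypothetical sketch of a daisy-chained docker build test job
- job: docker_build_test
  dependsOn: main_build_last_job   # illustrative name; chains after the last main_build job
  timeoutInMinutes: 60
  displayName: Docker build test
  pool:
    name: deepseismicagentpool
  steps:
  - bash: |
      docker build -t seismic-deeplearning:test .
      # smoke-test the image rather than running full training
      docker run --rm seismic-deeplearning:test python -c "import torch"
```

`dependsOn` is what makes the job run serially after the existing build rather than in parallel, which matches the single-GPU constraint described above.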

create branch microsoft/seismic-deeplearning

branch : docker_test

created branch time in 11 days

delete branch microsoft/seismic-deeplearning

delete branch : docker-multi-gpu

delete time in 12 days

push event microsoft/seismic-deeplearning

Fatemeh Zamanian

commit sha 8074a574491eafc1351e8d67c861bdf1de384b07

changed tensorflow pinned version (#387) * changed tensorflow pinned version * trigger build


push time in 12 days

PR merged microsoft/seismic-deeplearning

Reviewers
changed tensorflow pinned version

this is to fix the problem of running tensorboard!

+2 -3

0 comment

3 changed files

fazamani

pr closed time in 12 days

push event microsoft/seismic-deeplearning

Fatemeh Zamanian

commit sha a89a6ab2447aa7a89512dbe2450deacef2caaf09

trigger build


push time in 13 days

PR opened microsoft/seismic-deeplearning

changed tensorflow pinned version

this is to fix the problem of running tensorboard!

+2 -2

0 comment

2 changed files

pr created time in 13 days
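Pinning the TensorFlow version is what makes TensorBoard resolve to a compatible build. In a conda environment file such a pin looks like the fragment below; the version number here is purely illustrative, the actual pin is in the PR diff.

```yaml
# fragment of an environment.yml; version shown is illustrative only
dependencies:
  - tensorflow==2.0.0
  - tensorboard
```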

issue comment microsoft/seismic-deeplearning

enable multi-GPU training with Docker image - facilitates multi-GPU training on any Linux OS

@maxkazmsft @yalaudah closing this out upon successful test of the following:

  • All GPUs from the base VM are exposed to docker!
  • The distributed training runs as designed within the docker!
maxkazmsft

comment created time in 13 days
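The first bullet in the comment above, exposing every base-VM GPU to the container, corresponds to running the image with Docker's `--gpus all` flag (Docker 19.03+). The same request in Compose form looks like the fragment below; the image tag is hypothetical.

```yaml
# docker-compose fragment requesting every host GPU (image tag is hypothetical)
services:
  training:
    image: seismic-deeplearning:latest
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```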

create branch microsoft/seismic-deeplearning

branch : docker-multi-gpu

created branch time in 13 days

delete branch microsoft/seismic-deeplearning

delete branch : patch_deconvnet

delete time in 18 days

push event microsoft/seismic-deeplearning

Fatemeh Zamanian

commit sha c7fb6ae53ba840e0222b0fc0ef26e3fe326debde

PR to fix #371 and #372 (#380) * added learning rate to logs * changed epoch for patch_deconvnet, and enabled the tests * removed TODOs


push time in 18 days

PR merged microsoft/seismic-deeplearning

PR to fix #371 and #372

PR to fix #371 and #372

+4 -5

0 comment

2 changed files

fazamani

pr closed time in 18 days

startedmicrosoft/seismic-deeplearning

started time in 18 days

PR opened microsoft/seismic-deeplearning

PR to fix #371 and #372

PR to fix #371 and #372

+4 -5

0 comment

2 changed files

pr created time in 19 days

push event microsoft/seismic-deeplearning

Fatemeh

commit sha bf4585ddcaaa2f79499f5bb3f2b79273ac255478

changed epoch for patch_deconvnet, and enabled the tests


Fatemeh

commit sha 3228ad8bff334bde232c1c6f7721ab9e3f55a778

removed TODOs


push time in 19 days

create branch microsoft/seismic-deeplearning

branch : patch_deconvnet

created branch time in 19 days

delete branch microsoft/seismic-deeplearning

delete branch : data-flow-tests

delete time in 23 days

push event microsoft/seismic-deeplearning

Fatemeh

commit sha cf5473af4b825343a002ff67e5b7878b4d20a4f4

Data flow tests (#375) * renamed checkerboard job name * restructured default outputs from test.py to be dumped under output dir and not debug dir * test.py output re-org * removed outdated variable from check_performance.py * intermediate work * intermediate work * bunch of intermediate works * changing args for different trainings * final to run dev_build" * remove print statements * removed print statement * removed suppressed lines * added assertion error msg * added assertion error msg, one intential bug to test * testing a stupid bug * debug * omg * final * trigger build


push time in 23 days

PR merged microsoft/seismic-deeplearning

Data flow tests

this addresses #317

+270 -28

0 comment

6 changed files

fazamani

pr closed time in 23 days

push event microsoft/seismic-deeplearning

Fatemeh

commit sha ebaf999fe71863f955aaa3fe47f528ff15bdf6f0

trigger build


push time in 23 days

push event microsoft/seismic-deeplearning

Fatemeh

commit sha 2a72450154ecc56303fc4aeff0ba00d0980c199f

final


push time in 23 days

push event microsoft/seismic-deeplearning

Fatemeh

commit sha e2a5ae842620a04fc6e3a165e1579cb1d93d5801

omg


push time in 23 days

push event microsoft/seismic-deeplearning

Fatemeh

commit sha 1c14a91a3d63809b22dc4af13dee2e466ec2b3e3

debug


push time in 23 days

push event microsoft/seismic-deeplearning

yalaudah

commit sha 4f35e8d1d7ea3a5d6e329db82a9cbf248b828770

Multi-GPU training support (#359)


Fatemeh

commit sha 6d50d2b2cf710dea9adb9b9e8eb8cf809de936bf

staging merge


push time in 23 days

push event microsoft/seismic-deeplearning

Fatemeh

commit sha 5594a2a6f6813b0ab628f37c76b7b215e032559e

testing a stupid bug


push time in 23 days

push event microsoft/seismic-deeplearning

Fatemeh

commit sha 7b07598c2fcc089eb69b932cf7d2c7a361ede692

removed suppressed lines


Fatemeh

commit sha 08681660ec0bd66fbc97ad9ba4e4973755fa4a4e

added assertion error msg


Fatemeh

commit sha dad93a5bb55a9db5d99a4c24222f056a454e61b7

added assertion error msg, one intential bug to test


push time in 23 days

Pull request review comment microsoft/seismic-deeplearning

Data flow tests

 def _evaluate_split(

     n_classes = test_set.n_classes

+    if debug:
+        data_flow[split] = dict()
+        data_flow[split]['test_section_loader_length'] = len(test_set)
+        data_flow[split]['test_input_shape'] = test_set.seismic.shape
+        data_flow[split]['test_label_shape'] = test_set.labels.shape
+        data_flow[split]['n_classes'] = n_classes
+
     test_loader = data.DataLoader(test_set, batch_size=1, num_workers=config.WORKERS, shuffle=False)

     if debug:
+        data_flow[split]['test_loader_length'] = len(test_loader)
         logger.info("Running in Debug/Test mode")
-        test_loader = take(2, test_loader)
+        take_n = 2
+        test_loader = take(take_n, test_loader)
+        data_flow[split]['take_n_sections'] = take_n

I use this to test the length of predictions, labels, and images coming from model evaluation!

fazamani

comment created time in 23 days
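The `data_flow` dict in the diff above records loader lengths and shapes during a debug run so that a test can assert them later, as the reply explains. A minimal, self-contained sketch of that bookkeeping pattern (the loader here is a stand-in list, not the real `DataLoader`, and `take` is reimplemented from the stdlib rather than imported from toolz):

```python
from itertools import islice

def take(n, iterable):
    # first n items, as in toolz.itertoolz.take
    return list(islice(iterable, n))

data_flow = {}
split = "test1"
test_loader = list(range(10))  # stand-in for a section DataLoader

data_flow[split] = {"test_loader_length": len(test_loader)}
take_n = 2
test_loader = take(take_n, test_loader)
data_flow[split]["take_n_sections"] = take_n

# the lengths of predictions, labels, and images coming out of model
# evaluation should later match the number of sections actually consumed
assert len(test_loader) == data_flow[split]["take_n_sections"]
```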

Pull request review comment microsoft/seismic-deeplearning

Data flow tests

 def _evaluate_split(

     n_classes = test_set.n_classes

+    if debug:
+        data_flow[split] = dict()
+        data_flow[split]['test_section_loader_length'] = len(test_set)
+        data_flow[split]['test_input_shape'] = test_set.seismic.shape
+        data_flow[split]['test_label_shape'] = test_set.labels.shape
+        data_flow[split]['n_classes'] = n_classes
+
     test_loader = data.DataLoader(test_set, batch_size=1, num_workers=config.WORKERS, shuffle=False)

     if debug:
+        data_flow[split]['test_loader_length'] = len(test_loader)
         logger.info("Running in Debug/Test mode")
-        test_loader = take(2, test_loader)
+        take_n = 2
+        test_loader = take(take_n, test_loader)
+        data_flow[split]['take_n_sections'] = take_n
+        pred_list, gt_list, img_list = [], [], []
+
     try:
         output_dir = generate_path(
-            f"debug/{config.OUTPUT_DIR}_test_{split}", git_branch(), git_hash(), config.MODEL.NAME, current_datetime(),
+            f"{config.OUTPUT_DIR}/test/{split}", git_branch(), git_hash(), config.MODEL.NAME, current_datetime(),
         )
     except:
-        output_dir = generate_path(f"debug/{config.OUTPUT_DIR}_test_{split}", config.MODEL.NAME, current_datetime(),)
+        output_dir = generate_path(f"{config.OUTPUT_DIR}/test/{split}", config.MODEL.NAME, current_datetime(),)

     running_metrics_split = runningScore(n_classes)

     # evaluation mode:
     with torch.no_grad():  # operations inside don't track history
         model.eval()
-        total_iteration = 0
+        # total_iteration = 0

yes, I am pushing a commit that will remove this :)

fazamani

comment created time in 23 days

push event microsoft/seismic-deeplearning

Fatemeh

commit sha 1b81e83516bed2ef20f0bf9e8ccf81f806c795c8

removed print statement


push time in 24 days

PR opened microsoft/seismic-deeplearning

Data flow tests

this addresses #317

+205 -33

0 comment

6 changed files

pr created time in 24 days

push event microsoft/seismic-deeplearning

maxkazmsft

commit sha 5318b332ed33b35e5897ee37312c9d07882d2ff9

Replace HRNet with SEResNet model in the notebook (#362) * replaced HRNet with SEResNet model in the notebook * removed debugging cell info * fixed bug where resnet_unet model wasn't loading the pre-trained version in the notebook * fixed build VM problems


Fatemeh

commit sha 634604811b0ca870eaccfcedd0fc1192a27e7fbd

Merge branch 'staging' into data-flow-tests


push time in 24 days

push event microsoft/seismic-deeplearning

Fatemeh

commit sha f4de533d313643c6a0f786b4a51794943e0f9246

remove print statements


push time in 24 days

create branch microsoft/seismic-deeplearning

branch : data-flow-tests

created branch time in 24 days

delete branch microsoft/seismic-deeplearning

delete branch : simplify_tests

delete time in a month

push event microsoft/seismic-deeplearning

Fatemeh

commit sha d8092d034a2ff7ffc6a4a3e718c0b2c5b8884c5e

make tests simpler (#368) * removed Dutch F3 job from main_build * fixed a bug in data subset in debug mode * modified epoch numbers to pass the performance checks, checkedout check_performance from Max's branch * modified get_data_for_builds.sh to set up checkerboard data for smaller size, minor improvements on gen_checkerboard * send all the batches, disabled the performance checks for patch_deconvnet * added comment to enable tests for patch_deconvnet after debugging, renamed gen_checkerboard, added options to new arg per Max's suggestion

view details

push time in a month

PR merged microsoft/seismic-deeplearning

make tests simpler

This closes #346

performance checks for patch_deconvnet are disabled (investigating a performance issue)

+85 -190

0 comment

4 changed files

fazamani

pr closed time in a month

issue opened microsoft/seismic-deeplearning

investigate the reproducibility problem with patch_deconvnet for small training data set

There is a reproducibility problem with patch_deconvnet on the small data set! These are the same experiments with very different metrics:

[image attachment]

created time in a month
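Reproducibility gaps like the one reported above usually trace back to unseeded randomness (data shuffling, weight initialization, CUDA nondeterminism). A minimal illustration of the seeding discipline, using only the standard library; a real fix for this repo would also seed numpy and call `torch.manual_seed` / `torch.cuda.manual_seed_all`:

```python
import random

def set_seed(seed):
    # seed every RNG the experiment touches (stdlib only in this sketch)
    random.seed(seed)

set_seed(42)
run_a = [random.random() for _ in range(3)]
set_seed(42)
run_b = [random.random() for _ in range(3)]
# identical seeds must yield identical draws, otherwise two runs of the
# "same" experiment cannot be compared metric-for-metric
assert run_a == run_b
```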

issue opened microsoft/seismic-deeplearning

Add data QC module

Add a module to run quality checks on the data, e.g. check that the data is within pre-specified lower and upper bounds!

created time in a month
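The bounds check proposed in the issue above can be sketched in a few lines; the function name and thresholds here are illustrative, not part of the repo:

```python
def out_of_bounds(values, lower, upper):
    # return indices of samples outside the pre-specified [lower, upper] range
    return [i for i, v in enumerate(values) if not (lower <= v <= upper)]

amplitudes = [-0.9, 0.2, 1.7, 0.5, -2.1]
bad = out_of_bounds(amplitudes, lower=-1.0, upper=1.0)
# → [2, 4]
```

A QC module would flag (or clip) the offending samples before training rather than letting them skew normalization statistics.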

Pull request review comment microsoft/seismic-deeplearning

make tests simpler

 jobs:

       echo "PASSED"

-###################################################################################################
-# Stage 3: Dutch F3 patch models: deconvnet, unet, HRNet patch depth, HRNet section depth
-# CAUTION: reverted these builds to single-GPU leaving new multi-GPU code in to be reverted later
-###################################################################################################
-
-- job: dutchf3_patch
-  dependsOn: checkerboard_dutchf3_patch
-  timeoutInMinutes: 60
-  displayName: Dutch F3 patch local
-  pool:
-    name: deepseismicagentpool
-  steps:
-  - bash: |
-
-      source activate seismic-interpretation
-
-      # disable auto error handling as we flag it manually
-      set +e
-
-      cd experiments/interpretation/dutchf3_patch/local
-
-      # Create a temporary directory to store the statuses
-      dir=$(mktemp -d)
-
-      pids=
-      # export CUDA_VISIBLE_DEVICES=0
-      { python train.py 'DATASET.ROOT' '/home/alfred/data_dynamic/dutch_f3/data' 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 \
-                        'TRAIN.DEPTH' 'none' \
-                        'TRAIN.BATCH_SIZE_PER_GPU' 2 'VALIDATION.BATCH_SIZE_PER_GPU' 2 \
-                        'OUTPUT_DIR' 'output' 'TRAIN.MODEL_DIR' 'no_depth' \
-                        'WORKERS' 1 \
-                        --cfg=configs/patch_deconvnet.yaml --debug ; echo "$?" > "$dir/$BASHPID"; }
-      pids+=" $!"
-      # export CUDA_VISIBLE_DEVICES=1
-      { python train.py 'DATASET.ROOT' '/home/alfred/data_dynamic/dutch_f3/data' 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 \
-                        'TRAIN.DEPTH' 'section' \
-                        'TRAIN.BATCH_SIZE_PER_GPU' 2 'VALIDATION.BATCH_SIZE_PER_GPU' 2 \
-                        'OUTPUT_DIR' 'output' 'TRAIN.MODEL_DIR' 'section_depth' \
-                        'WORKERS' 1 \
-                        --cfg=configs/unet.yaml --debug ; echo "$?" > "$dir/$BASHPID"; }
-      pids+=" $!"
-      # export CUDA_VISIBLE_DEVICES=2
-      { python train.py 'DATASET.ROOT' '/home/alfred/data_dynamic/dutch_f3/data' 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 \
-                        'TRAIN.DEPTH' 'section' \
-                        'TRAIN.BATCH_SIZE_PER_GPU' 2 'VALIDATION.BATCH_SIZE_PER_GPU' 2 \
-                        'OUTPUT_DIR' 'output' 'TRAIN.MODEL_DIR' 'section_depth' \
-                        'WORKERS' 1 \
-                        --cfg=configs/seresnet_unet.yaml --debug ; echo "$?" > "$dir/$BASHPID"; }
-      pids+=" $!"
-      # export CUDA_VISIBLE_DEVICES=3
-      { python train.py 'DATASET.ROOT' '/home/alfred/data_dynamic/dutch_f3/data' 'TRAIN.END_EPOCH' 1 'TRAIN.SNAPSHOTS' 1 \
-                        'TRAIN.DEPTH' 'section' \
-                        'TRAIN.BATCH_SIZE_PER_GPU' 2 'VALIDATION.BATCH_SIZE_PER_GPU' 2 \
-                        'MODEL.PRETRAINED' '/home/alfred/models/hrnetv2_w48_imagenet_pretrained.pth' \
-                        'OUTPUT_DIR' 'output' 'TRAIN.MODEL_DIR' 'section_depth' \
-                        'WORKERS' 1 \
-                        --cfg=configs/hrnet.yaml --debug ; echo "$?" > "$dir/$BASHPID"; }
-      pids+=" $!"
-
-      wait $pids || exit 1
-
-      # check if any of the models had an error during execution
-      # Get return information for each pid
-      for file in "$dir"/*; do
-        printf 'PID %d returned %d\n' "${file##*/}" "$(<"$file")"
-        [[ "$(<"$file")" -ne "0" ]] && exit 1 || echo "pass"
-      done
-
-      # Remove the temporary directory
-      rm -r "$dir"
-
-      echo "All models finished training - start scoring"
-
-      # Create a temporary directory to store the statuses
-      dir=$(mktemp -d)
-
-      pids=
-      # export CUDA_VISIBLE_DEVICES=0
-      # find the latest model which we just trained
-      # if we're running on a build VM
-      model_dir=$(ls -td output/patch_deconvnet/no_depth/* | head -1)
-      # if we're running in a checked out git repo
-      [[ -z ${model_dir} ]] && model_dir=$(ls -td output/$(git rev-parse --abbrev-ref HEAD)/*/patch_deconvnet/no_depth/* | head -1)
-      model=$(ls -t ${model_dir}/*.pth | head -1)
-      # try running the test script
-      { python test.py 'DATASET.ROOT' '/home/alfred/data_dynamic/dutch_f3/data' \
-                       'TEST.SPLIT' 'Both' 'TRAIN.MODEL_DIR' 'no_depth' \
-                       'TEST.MODEL_PATH' ${model} \
-                       'WORKERS' 1 \
-                       --cfg=configs/patch_deconvnet.yaml --debug ; echo "$?" > "$dir/$BASHPID"; }
-      pids+=" $!"
-      # export CUDA_VISIBLE_DEVICES=1
-      # find the latest model which we just trained
-      # if we're running on a build VM
-      model_dir=$(ls -td output/unet/section_depth/* | head -1)
-      # if we're running in a checked out git repo
-      [[ -z ${model_dir} ]] && model_dir=$(ls -td output/$(git rev-parse --abbrev-ref HEAD)/*/unet/section_depth* | head -1)
-      model=$(ls -t ${model_dir}/*.pth | head -1)
-      # try running the test script
-      { python test.py 'DATASET.ROOT' '/home/alfred/data_dynamic/dutch_f3/data' \
-                       'TEST.SPLIT' 'Both' 'TRAIN.MODEL_DIR' 'section_depth' \
-                       'TEST.MODEL_PATH' ${model} \
-                       'WORKERS' 1 \
-                       --cfg=configs/unet.yaml --debug ; echo "$?" > "$dir/$BASHPID"; }
-      pids+=" $!"
-      # export CUDA_VISIBLE_DEVICES=2
-      # find the latest model which we just trained
-      # if we're running on a build VM
-      model_dir=$(ls -td output/seresnet_unet/section_depth/* | head -1)
-      # if we're running in a checked out git repo
-      [[ -z ${model_dir} ]] && model_dir=$(ls -td output/$(git rev-parse --abbrev-ref HEAD)/*/seresnet_unet/section_depth/* | head -1)
-      model=$(ls -t ${model_dir}/*.pth | head -1)
-      # try running the test script
-      { python test.py 'DATASET.ROOT' '/home/alfred/data_dynamic/dutch_f3/data' \
-                       'TEST.SPLIT' 'Both' 'TRAIN.MODEL_DIR' 'section_depth' \
-                       'TEST.MODEL_PATH' ${model} \
-                       'WORKERS' 1 \
-                       --cfg=configs/seresnet_unet.yaml --debug ; echo "$?" > "$dir/$BASHPID"; }
-      pids+=" $!"
-      # export CUDA_VISIBLE_DEVICES=3
-      # find the latest model which we just trained
-      # if we're running on a build VM
-      model_dir=$(ls -td output/hrnet/section_depth/* | head -1)
-      # if we're running in a checked out git repo
-      [[ -z ${model_dir} ]] && model_dir=$(ls -td output/$(git rev-parse --abbrev-ref HEAD)/*/hrnet/section_depth/* | head -1)
-      model=$(ls -t ${model_dir}/*.pth | head -1)
-      # try running the test script
-      { python test.py 'DATASET.ROOT' '/home/alfred/data_dynamic/dutch_f3/data' \
-                       'TEST.SPLIT' 'Both' 'TRAIN.MODEL_DIR' 'section_depth' \
-                       'MODEL.PRETRAINED' '/home/alfred/models/hrnetv2_w48_imagenet_pretrained.pth' \
-                       'TEST.MODEL_PATH' ${model} \
-                       'WORKERS' 1 \
-                       --cfg=configs/hrnet.yaml --debug ; echo "$?" > "$dir/$BASHPID"; }
-      pids+=" $!"
-
-      # wait for completion
-      wait $pids || exit 1
-
-      # check if any of the models had an error during execution
-      # Get return information for each pid
-      for file in "$dir"/*; do
-        printf 'PID %d returned %d\n' "${file##*/}" "$(<"$file")"
-        [[ "$(<"$file")" -ne "0" ]] && exit 1 || echo "pass"
-      done
-
-      # Remove the temporary directory
-      rm -r "$dir"
-
-      echo "PASSED"

@yalaudah I mark this as resolved per our chat :)

fazamani

comment created time in a month

Pull request review comment microsoft/seismic-deeplearning

make tests simpler

 python prepare_dutchf3.py split_train_val patch   --data_dir=${DATA_F3} --label_
 DATA_CHECKERBOARD="${DATA_CHECKERBOARD}/data"
 # repeat for checkerboard dataset
 python prepare_dutchf3.py split_train_val section --data_dir=${DATA_CHECKERBOARD} --label_file=train/train_labels.npy --output_dir=splits --split_direction=both
-python prepare_dutchf3.py split_train_val patch   --data_dir=${DATA_CHECKERBOARD} --label_file=train/train_labels.npy --output_dir=splits --stride=50 --patch_size=100 --split_direction=both
+python prepare_dutchf3.py split_train_val patch   --data_dir=${DATA_CHECKERBOARD} --label_file=train/train_labels.npy --output_dir=splits --stride=50 --patch_size=100 --split_direction=both --section_stride=100

@yalaudah I mark this as resolved per our chat :)

fazamani

comment created time in a month

push event microsoft/seismic-deeplearning

maxkazmsft

commit sha dd6244fd9880ac38ac134d46126a96b1b8c0f6e3

merging work from CSE team into main staging branch (#357) * Adding content to interpretation README (#171) * added sharat, weehyong to authors * adding a download script for Dutch F3 dataset * Adding script instructions for dutch f3 * Update README.md prepare scripts expect root level directory for dutch f3 dataset. (it is downloaded into $dir/data by the script) * Adding readme text for the notebooks and checking if config is correctly setup * fixing prepare script example * Adding more content to interpretation README * Update README.md * Update HRNet_Penobscot_demo_notebook.ipynb Co-authored-by: maxkazmsft <maxkaz@microsoft.com> * Updates to prepare dutchf3 (#185) * updating patch to patch_size when we are using it as an integer * modifying the range function in the prepare_dutchf3 script to get all of our data * updating path to logging.config so the script can locate it * manually reverting back log path to troubleshoot build tests * updating patch to patch_size for testing on preprocessing scripts * updating patch to patch_size where applicable in ablation.sh * reverting back changes on ablation.sh to validate build pass * update patch to patch_size in ablation.sh (#191) Co-authored-by: Sharat Chikkerur <sharat.chikkerur@gmail.com> * TestLoader's support for custom paths (#196) * Add testloader support for custom paths. * Add test * added file name workaround for Train*Loader classes * adding comments and clean up * Remove legacy code. * Remove parameters that dont exist in init() from documentation. 
* Add unit tests for data loaders in dutchf3 * moved unit tests Co-authored-by: maxkazmsft <maxkaz@microsoft.com> * select contiguous data splits for val and train (#200) * select contiguous data splits for test and train * changed data-dir to data_dir as arg to prepare_dutchf3.py * update script with new required parameter label_file * ignoring split_alaudah_et_al_19 as it is not updated * changed TEST to VALIDATION for clarity in the code * included job to run scripts unit test * Fix val/train split and add tests * adjust to consider the whole horz_lines * update environment - gitpython version * Segy Converter Utility (#199) * Add convert_segy utility script and related notebooks * add segy files to .gitignore * readability update * Create methods for normalizing and clipping separately. * Add comment * update file paths * cleanup tests and terminology for the normalization/clipping code * update notes to provide more context for using the script * Add tests for clipping. * Update comments * added Microsoft copyright * Update root README * Add a flag to turn on clipping in dataprep script. * Remove hard coded values and fix _filder_data method. * Fix some minor issues pointed out on comments. * Remove unused lib. * Rename notebooks to impose order; set env; move all def funtions into utils; improve comments in notebooks; and include code example to run prepare_dutchf3.py * Label missing data with 255. * Remove cell with --help command. * Add notebooks to test pipeline. 
* grammer edits * update notebook output and utils naming * fix output dir error and cleanup notebook * fix yaml indent error in notebooks_build.yml * fix merge issues and job name errors * debugging the build pipeline * combine notebook tests for segy converter since they are dependent on each other Co-authored-by: Geisa Faustino <32823639+GeisaFaustino@users.noreply.github.com> * Azureml train pipeline (#195) * initial add of azure ml pipeline * update references and dependencies * fix integration tests * remove incomplete tests * add azureml requirements.txt for dutchf3 local patch and update pipeline config * add empty __init__.py to cv_lib dutchf3 * Get train,py to run in pipeline * allow output dir in train.py * Clean up README and __init__ * only pass output if available and use input dir for output in train.py * update comment in train.py * updating azureml_requirements to only pull from /master * removing windows guidance in azureml_pipelines/README.md * adding .env.example * adding azureml config example * updating documentation in azureml_pipelines README.md * updating main README.md to refer to AML guidance documentation * updating AML README.md to include additional guidance to cancel runs * adding documentation on AzureML pipelines in the AML README.me * adding files needed section for AML training run * including hyperlink in format poiniting to additional detail on Azure Machine Learning pipeslines in AML README.md * removing the mention of VSCode in the AML README.md * fixing typo * modifying config to pipeline configuration in README.md * fixing typo in README.md * adding documentation on how to create a blob container and copy data onto it * adding documentation on blob storage guidance * adding guidance on how to get the subscription id * adding guidance to activate environment and then run the kick off train pipeline from ROOT * adding ability to pass in experiement name and different pipeline configuration to kickoff_train_pipeline.py * adding 
Microsoft Corporation Copyright to kickoff_train_pipeline.py * fixing format in README.md * adding trouble shooting section in README.md for connection to subscription * updating troubleshooting title * adding guidance on how to download the config.json from the Azure Portal in the README.md * adding additional guidance and information on AzureML compute targets and naming conventions * changing the configuation file example to only include the train step that is currently supported * updating config to pipeline configuration when applicable * adding link to Microsoft docs for additional information on pipeline steps * updated AML test build definitions * updated AML test build definitions * adding job to aml_build.yml * updating example config for testing * modifying the test_train_pipeline.py to have appropriate number of pipeline steps and other required modifications * updating AML_pipeline_tests in aml_build.yml to consume environment variables * updating scriptType, sciptLocation, and inlineScript in aml_build.yml * trivial commit to re-trigger broken build pipelines * fix to aml yml build to use env vars for secrets and everything else * another yml fix * another yml fix * reverting structure format of jobs for aml_build pipeline tests * updating path to test_train_pipeline.py * aml_pipeline_tests timed out, extending timeoutInMinutes from 10 to 40 * adding additional pytest * adding az login * updating variables in aml pipeline tests Co-authored-by: Anna Zietlow <annamzietlow@gmail.com> Co-authored-by: maxkazmsft <maxkaz@microsoft.com> * moved contrib contributions around from CSE * fixed dataloader tests - updated them to work with new code from staging branch * segyconverter notebooks and tests run and pass; updated documentation * added test job for segy converter notebooks * removed AML training pipeline from this release * fixed training model tolerance precision in the tests - wasn't working * fixed train.py build issues after the merge * addressed PR 
comments * fixed bug in check_performance Co-authored-by: Sharat Chikkerur <sharat.chikkerur@microsoft.com> Co-authored-by: kirasoderstrom <kirasoderstrom@gmail.com> Co-authored-by: Sharat Chikkerur <sharat.chikkerur@gmail.com> Co-authored-by: Geisa Faustino <32823639+GeisaFaustino@users.noreply.github.com> Co-authored-by: Ricardo Squassina Lee <8495707+squassina@users.noreply.github.com> Co-authored-by: Michael Zawacki <mikezawacki@hotmail.com> Co-authored-by: Anna Zietlow <annamzietlow@gmail.com>


Fatemeh

commit sha aba1beaebbf44f66989b63f69ffbcb745b98546e

conflict resolution


push time in a month

Pull request review comment microsoft/seismic-deeplearning

make tests simpler

 jobs:

       # check validation set performance
       set -e
-      python ../../../../tests/cicd/src/check_performance.py --infile metrics_patch_deconvnet_no_depth.json
+      # python ../../../../tests/cicd/src/check_performance.py --infile metrics_patch_deconvnet_no_depth.json

same, we will enable this after investigation!

fazamani

comment created time in a month

Pull request review comment microsoft/seismic-deeplearning

make tests simpler

 jobs:

       # check test set performance
       set -e
-      python ../../../../tests/cicd/src/check_performance.py --infile metrics_test_patch_deconvnet_no_depth.json --test
+      # python ../../../../tests/cicd/src/check_performance.py --infile metrics_test_patch_deconvnet_no_depth.json --test

as discussed, I added a "TODO" to enable this after investigation!

fazamani

comment created time in a month

push event microsoft/seismic-deeplearning

Fatemeh

commit sha fb924c9fb5c3e6bd726d0289deb62c5f90d0ca19

added comment to enable tests for patch_deconvnet after debugging, renamed gen_checkerboard, added options to new arg per Max's suggestion


push time in a month

Pull request review comment microsoft/seismic-deeplearning

make tests simpler

 def main(args):

 parser.add_argument("--dataroot", help="Root location of the input data", type=str, required=True)
 parser.add_argument("--dataout", help="Root location of the output data", type=str, required=True)
 parser.add_argument("--box_size", help="Size of the bounding box", type=int, required=False, default=100)
+parser.add_argument("--based_on", help="This determines the shape of synthetic data array", type=str, required=False, default='dutch_f3')

@maxkazmsft are you suggesting changing the "based_on" argument to use "choices"?
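For context, the suggestion being discussed could look like the sketch below: keep the `based_on` argument but constrain its accepted values with argparse's built-in `choices`, so an invalid shape name fails fast at parse time. The set of option values here is illustrative, not the repo's final interface.

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--based_on",
    help="Determines the shape of the synthetic data array",
    type=str,
    choices=["dutch_f3", "fixed_box_number"],  # hypothetical allowed values
    default="dutch_f3",
)

# Valid value parses normally; an unlisted value makes argparse
# print an error and exit instead of propagating bad input downstream.
args = parser.parse_args(["--based_on", "dutch_f3"])
print(args.based_on)
```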

fazamani

comment created time in a month

Pull request review comment microsoft/seismic-deeplearning

make tests simpler

jobs:
      # check validation set performance
      set -e
-      python ../../../../tests/cicd/src/check_performance.py --infile metrics_patch_deconvnet_no_depth.json
+      # python ../../../../tests/cicd/src/check_performance.py --infile metrics_patch_deconvnet_no_depth.json

Thanks @maxkazmsft, I didn't mean to disable this permanently; this is the reproducibility problem we are investigating. Will enable it once we have a resolution!

fazamani

comment created time in a month

issue opened microsoft/seismic-deeplearning

Dynamic global data information - reduces user-required specifications

Add a step to the data preprocessing module that reads the "global min and max" and the "number of classes" directly from the data, and updates the config files accordingly.
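A minimal sketch of what that preprocessing step could do, assuming the volume and its labels live in numpy files (`data.npy`, `labels.npy` are hypothetical names, as are the config keys):

```python
import json
import numpy as np

def derive_global_stats(data_path, labels_path):
    """Read global data range and class count directly from the data."""
    data = np.load(data_path, mmap_mode="r")      # seismic volume
    labels = np.load(labels_path, mmap_mode="r")  # integer class labels
    return {
        "MIN": float(data.min()),
        "MAX": float(data.max()),
        "NUM_CLASSES": int(len(np.unique(labels))),
    }

if __name__ == "__main__":
    # These stats could then be merged into the experiment config,
    # removing the need for user-specified values.
    stats = derive_global_stats("data.npy", "labels.npy")
    print(json.dumps(stats, indent=2))
```

Using `mmap_mode="r"` keeps memory use bounded even for large volumes, since the reductions stream over the memory-mapped array.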

created time in a month

PR opened microsoft/seismic-deeplearning

make tests simpler

This closes #346

performance checks for patch_deconvnet are disabled (investigating a performance issue)

+89 -199

0 comments

5 changed files

pr created time in a month

push event microsoft/seismic-deeplearning

Fatemeh

commit sha 3fd2fca6f01d2790ae825d62a3002a5f3e13a351

fixed a bug in data subset in debug mode

view details

Fatemeh

commit sha 6b836b66d831f1841acf3f8496d8368e0d4634fe

modified epoch numbers to pass the performance checks, checked out check_performance from Max's branch

view details

Fatemeh

commit sha 52a00985daa0df1ab84a27170c9d9299139cb0de

modified get_data_for_builds.sh to set up checkerboard data for smaller size, minor improvements on gen_checkerboard

view details

Fatemeh

commit sha 0aee910f13fc6d6ede8e21f057510763bc44f164

send all the batches, disabled the performance checks for patch_deconvnet

view details

push time in a month

issue comment microsoft/seismic-deeplearning

add correctness data and class size unit test metrics - facilitates certainty in correct data flow

add a test to check the bounds (limits) on the data
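The suggested bounds check could be sketched as a small unit-test helper; the function name and the configured range here are hypothetical:

```python
import numpy as np

def check_data_bounds(data, expected_min, expected_max):
    """Raise if any value falls outside the configured [min, max] range."""
    actual_min, actual_max = float(data.min()), float(data.max())
    if actual_min < expected_min or actual_max > expected_max:
        raise ValueError(
            f"data range [{actual_min}, {actual_max}] exceeds "
            f"configured [{expected_min}, {expected_max}]"
        )

# In-range data passes silently; out-of-range data raises.
check_data_bounds(np.array([-0.5, 0.9]), expected_min=-1.0, expected_max=1.0)
print("bounds OK")
```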

maxkazmsft

comment created time in a month

create branch microsoft/seismic-deeplearning

branch : simplify_tests

created branch time in a month

push event microsoft/AML_Interpret_usecases

Fatemeh Zamanian

commit sha 72d806de083eb56bf4462c0906820f0266b55716

slide

view details

Fatemeh Zamanian

commit sha ccd48f6d4bc8316c011e8fe3e774b82a2434e931

Merge branch 'master' of github.com:microsoft/AML_Interpret_usecases

view details

push time in a month

delete branch microsoft/seismic-deeplearning

delete branch : image-normalization

delete time in a month

push event microsoft/AML_Interpret_usecases

Fatemeh Zamanian

commit sha 4ac49591ce77bbd29cfdb5cedf6d4976385dc95a

added slides

view details

push time in a month

push event microsoft/seismic-deeplearning

Fatemeh

commit sha fdd0dafa5db81bdbc829743b0475f2334febb09f

PR to fix #342 (#347) * intermediate work for normalization * 1) normalize function runs based on global MIN and MAX 2) has error handling for division by zero, np.finfo 3) decode_segmap normalizes the label/mask based on n_classes * global normalization added to test.py * increasing the threshold on timeout * trigger * revert * idk what happened * increase timeout * picking up global min and max * passing config to TrainPatchLoader to facilitate access to global min and max and other attr in low level functions, WIP * removed print statement * changed section loaders * updated test for min and max from config too * added MIN and MAX to config * notebook modified for loaders * another dataloader in notebook * readme update * changed the default values for min max, updated the docstring for loaders, removed suppressed lines * debug

view details

push time in a month

push event microsoft/seismic-deeplearning

Fatemeh

commit sha c92ce78796d8e3d7818ec269f9460eecf3b231c6

readme update

view details

Fatemeh

commit sha 6140e235142318025a091d477e93f71bfa01c9d7

changed the default values for min max, updated the docstring for loaders, removed suppressed lines

view details

Fatemeh

commit sha ab184d87a46ed279307d15289e4e51002106a4e5

debug

view details

push time in a month

push event microsoft/seismic-deeplearning

Fatemeh

commit sha ea9f319533bad2b6f432190cad3c4b8f365f267f

removed print statement

view details

Fatemeh

commit sha 7818b733acf8d800a38f35223da0c386882ca6aa

changed section loaders

view details

Fatemeh

commit sha 18f4e02abb876e59a21ba171220cb6e521131239

updated test for min and max from config too

view details

Fatemeh

commit sha 380f217e08a978b530eeed94e69f75fd467592db

added MIN and MAX to config

view details

Fatemeh

commit sha 4f3b5032b8f1222ac1b8b72d5f28f917dd39a837

notebook modified for loaders

view details

Fatemeh

commit sha ff797b79201823e7a1a5468e18716fc337b29704

another dataloader in notebook

view details

push time in a month

push event microsoft/seismic-deeplearning

Fatemeh

commit sha b015317720838392054a5db2e57fe2bc5edb54c3

picking up global min and max

view details

Fatemeh

commit sha 39a861a670dc82a2907644b47e8dc6d4c0ffedf0

passing config to TrainPatchLoader to facilitate access to global min and max and other attr in low level functions, WIP

view details

push time in a month

push event microsoft/seismic-deeplearning

Fatemeh

commit sha 627afe86ea845c895cab7bb89a217f14bbe8c386

increase timeout

view details

push time in a month

push event microsoft/seismic-deeplearning

yalaudah

commit sha 1b5b27d23c3568b9515c165e96d4daa81a83b792

replace hrnet with seresnet in experiments - provides stable default model (#343)

view details

Fatemeh

commit sha a9b0aa9618e667edf7128283b657c2baaf713940

Merge branch 'staging' into image-normalization

view details

push time in a month

push event microsoft/seismic-deeplearning

Fatemeh

commit sha 18d66cb59e0b00957dc5295ef650339eaeabd1c9

idk what happened

view details

push time in a month

push event microsoft/seismic-deeplearning

Fatemeh

commit sha 0a050b1f44b2c010c48c537cef19a9143a848949

revert

view details

push time in a month

push event microsoft/seismic-deeplearning

Fatemeh

commit sha f699e996b14bbf9e3cd321e0b9af8f454bd305bd

trigger

view details

push time in a month

push event microsoft/seismic-deeplearning

Fatemeh

commit sha 684e225ac1c08405e955b97756f870f49ba51b48

increasing the threshold on timeout

view details

push time in a month

issue comment microsoft/seismic-deeplearning

image normalization should be wrt the volume image - facilitates correctly displayed image colormap range

Also addressed within this issue: the mask/label in the function below should be normalized wrt n_classes:

[image]

fazamani

comment created time in a month

Pull request review comment microsoft/seismic-deeplearning

PR to fix #342

 def _patch_label_2d(
                 path_prefix = f"{outdir}/{batch_indexes[i][0]}_{batch_indexes[i][1]}"
                 model_output = model_output.detach().cpu()
                 # save image:
-                image_to_disk(np.array(batch[i, 0, :, :]), path_prefix + "_img.png")
+                image_to_disk(np.array(batch[i, 0, :, :]), path_prefix + "_img.png", float(img.min()), float(img.max()))

Do you mean now, or in the next attempt to address the "global MIN and MAX" next sprint? Currently we don't have MIN and MAX defined in the config, so I reverted those changes!

fazamani

comment created time in a month

push event microsoft/seismic-deeplearning

yalaudah

commit sha 4986bec5d319c33baea3949ff2378ad371da0a15

bug fix to model predictions (#345)

view details

Fatemeh

commit sha c67a3229e7f6f2f76f10854dc1b8ba3e5998d217

Merge branch 'staging' into image-normalization

view details

push time in a month

push event microsoft/seismic-deeplearning

Fatemeh

commit sha 6b1ebdfbf3a55ea5b73d62efabbdd76f317fdb6f

global normalization added to test.py

view details

push time in a month

push event microsoft/AML_Interpret_usecases

Fatemeh Zamanian

commit sha 595b10266af480d8f2cb8b2e1091624e1adfb11f

'same'

view details

push time in a month

push event microsoft/AML_Interpret_usecases

Fatemeh Zamanian

commit sha d44998a690a1a53f76a563da6977e17d1281c0de

moving files to this repo

view details

push time in a month

push event microsoft/seismic-deeplearning

Fatemeh

commit sha cd575ff66dde8681bd5149848f81626606e88886

1) normalize function runs based on global MIN and MAX 2) has error handling for division by zero, np.finfo 3) decode_segmap normalizes the label/mask based on n_classes

view details

push time in a month

create branch microsoft/seismic-deeplearning

branch : image-normalization

created branch time in a month

delete branch microsoft/seismic-deeplearning

delete branch : gradient-dataset-bug

delete time in a month

push event microsoft/seismic-deeplearning

Fatemeh

commit sha 55bad56dad9cca915f0014553941ea684a352346

fix 333 (#340) * added label correction to train gradient * changing the gradient data generator to take inline/crossline argument consistent with the patchloader * changing variable name to be more descriptive Co-authored-by: maxkazmsft <maxkaz@microsoft.com>

view details

push time in a month

issue opened microsoft/seismic-deeplearning

image normalization should be wrt the volume image

The way we normalize the image before saving to disk,

[image]

is relative to each image, which causes the following

[image]

the labels change, but the images are "mapped" to the same color! This may only be relevant to data where the pixel values in each segmented image take just two unique values, while the volume overall contains more unique pixel values.

Hence, the normalization should be wrt a global range in the volume of data.
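The difference can be sketched as follows: normalize against the volume-wide range rather than each image's own min/max, with a machine-epsilon guard against division by zero (mirroring the `np.finfo` safeguard mentioned in the related commits). The function name and values here are illustrative, not the repo's exact implementation.

```python
import numpy as np

def normalize(image, global_min, global_max):
    """Map pixel values into [0, 1] using the volume-wide range."""
    eps = float(np.finfo(np.float32).eps)  # guard against zero-width range
    return (image - global_min) / max(global_max - global_min, eps)

# Per-image normalization would stretch both images to the full [0, 1]
# range; global normalization preserves their relative intensities.
a = np.array([0.0, 1.0])
b = np.array([0.0, 4.0])
print(normalize(a, 0.0, 4.0))  # a occupies only the lower part of the range
print(normalize(b, 0.0, 4.0))  # b spans the full range
```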

created time in a month
