
yalaudah/facies_classification_benchmark 46

This repository includes the PyTorch code and the data needed to reproduce the results of our paper "A Machine Learning Benchmark for Facies Classification" (published in the SEG Interpretation journal, August 2019).

olgaliak/seismic-deeplearning 2

Deep Learning for Seismic Imaging and Interpretation

yalaudah/clever_gtc_coherence 1

Implementing GTC coherence attribute for Clever Azure Platform

yalaudah/CtCI-6th-Edition-cpp 1

Cracking the Coding Interview 6th Ed. C++ Solutions

yalaudah/FCN-pytorch 1

🚘 The easiest implementation of fully convolutional networks

yalaudah/Python-for-Signal-Processing 1

Notebooks for "Python for Signal Processing" book

olivesgatech/directional-coherence 0

Code and data for the paper "A Directional Coherence Attribute for Seismic Interpretation," published at SEG 2017.

yalaudah/DAT4 0

General Assembly's Data Science course in Washington, DC

yalaudah/DeepSpeed 0

DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.

yalaudah/dpctex 0

Assorted TeX packages

issue closed microsoft/seismic-deeplearning

prepare_dutchf3.py vertical sample locations drop patches at the bottom of the volume, leading to worse results in that region

In the prepare_dutchf3.py script, the vertical locations are computed such that the bottom of the volume is not sampled. Because the code doesn't use any padding around the volume, the bottom part of the volume is not sampled unless the depth of the volume is an integer multiple of patch_size. This significantly affects the results for the deeper classes (Zechstein, Scruff, Chalk).


Here's the line of code that needs to be fixed (either by padding the volume or by manually adding patches at the bottom of the volume): https://github.com/microsoft/seismic-deeplearning/blob/79d19710dac4fccc42e6fbfbe9b85c7157224782/scripts/prepare_dutchf3.py#L149

This is also a problem for the horizontal patch locations, but to a lesser degree, since the data doesn't change much in the horizontal direction.
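
To make the fix concrete, here is a minimal sketch of how the vertical locations could be generated so the bottom is always covered. This is not the repository's actual code; the function name and variables are hypothetical:

def vertical_patch_locations(depth, patch_size, stride):
    # Regular grid of top coordinates; on its own this skips the bottom
    # whenever depth is not an integer multiple of the patch/stride layout.
    locations = list(range(0, depth - patch_size + 1, stride))
    # Append one bottom-aligned patch so the deepest classes are sampled too.
    if locations and locations[-1] + patch_size < depth:
        locations.append(depth - patch_size)
    return locations

# Example: depth=255, patch_size=100, stride=50 -> [0, 50, 100, 150, 155]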

closed time in 2 days

yalaudah

delete branch microsoft/seismic-deeplearning

delete branch : add-uniform-padding

delete time in 2 days

push event microsoft/seismic-deeplearning

yalaudah

commit sha 21262504236ac4d9ca875c25ab419ac8d51effdb

Fixing #259 by adding symmetric padding along depth direction (#386)

view details

push time in 2 days

push event microsoft/seismic-deeplearning

maxkazmsft

commit sha eee7dd2ee987cf1bf19d5a813165ab2d831eacfd

closes 385 (#389)

view details

yalaudah

commit sha 6aa4521875afe7b147e388051aed22d74b2cf20d

Merge branch 'staging' into add-uniform-padding

view details

push time in 2 days

push event microsoft/seismic-deeplearning

yalaudah

commit sha 4df8fab77e48087c240bcb35350df826b973d6ef

Update interpretation/deepseismic_interpretation/dutchf3/data.py

view details

push time in 3 days

Pull request review comment microsoft/seismic-deeplearning

Fixing #259 by adding symmetric padding along depth direction

 class PatchLoader(data.Dataset):
     :param bool debug: enable debugging output
     """
-    def __init__(
-        self, config, is_transform=True, augmentations=None, debug=False,
-        ):
+    def __init__(self, config, is_transform=True, augmentations=None, debug=False, split="train"):

Suggested change:
    def __init__(self, config, split="train", is_transform=True, augmentations=None, debug=False):
yalaudah

comment created time in 3 days

started microsoft/AcademicContent

started time in 3 days

push event microsoft/seismic-deeplearning

yalaudah

commit sha af63a61ad7618f7d96d59fa2c668c93a300f2277

bug fix

view details

push time in 3 days

push event microsoft/seismic-deeplearning

yalaudah

commit sha feb1100d9440060b735d8c1fd019877ec97382b8

bug fix

view details

push time in 3 days

push event microsoft/seismic-deeplearning

yalaudah

commit sha 491314c4b465e4598240252cd636d561c3c83ad9

bug fix

view details

push time in 3 days

push event microsoft/seismic-deeplearning

yalaudah

commit sha 519eed97486cac425a85bf9d3e6943f025184d50

bug fix

view details

push time in 3 days

push event microsoft/seismic-deeplearning

Fatemeh Zamanian

commit sha 8074a574491eafc1351e8d67c861bdf1de384b07

changed tensorflow pinned version (#387) * changed tensorflow pinned version * trigger build

view details

yalaudah

commit sha 3ee5a9a444ed00dbf0b6288a045e055e192db38c

Merge branch 'staging' into add-uniform-padding

view details

push time in 4 days

push event microsoft/seismic-deeplearning

yalaudah

commit sha a04beb793596536e46973aa8ac812bc17b26e4f4

retrigger build

view details

push time in 4 days

push event microsoft/seismic-deeplearning

yalaudah

commit sha 8e8dc49762b3ee452278b7374124b9c9205daa6a

fixes the code to pad only along the depth dimension

view details

push time in 4 days

PR opened microsoft/seismic-deeplearning

initial commit

This PR should close #259

+37 -66

0 comment

3 changed files

pr created time in 5 days

push event microsoft/seismic-deeplearning

yalaudah

commit sha 3e709d93600995456c648ee052d7a3ca2c6dc1cc

initial commit

view details

push time in 5 days

create branch microsoft/seismic-deeplearning

branch : add-uniform-padding

created branch time in 5 days

started MilesCranmer/symbolic_deep_learning

started time in 7 days

started UniversalDataTool/universal-data-tool

started time in 8 days

fork yalaudah/rich

Rich is a Python library for rich text and beautiful formatting in the terminal.

https://rich.readthedocs.io/en/latest/

fork in 8 days

started willmcgugan/rich

started time in 8 days

started google/caliban

started time in 12 days

started microsoft/ALEX

started time in 14 days

delete branch microsoft/seismic-deeplearning

delete branch : multi-gpu-training

delete time in 15 days

push event microsoft/seismic-deeplearning

yalaudah

commit sha 4f35e8d1d7ea3a5d6e329db82a9cbf248b828770

Multi-GPU training support (#359)

view details

push time in 15 days

PR merged microsoft/seismic-deeplearning

Multi-GPU training support (labels: Type: Feature, multi-GPU)

Adds support for multi-GPU training. Once merged, this PR will close #320.
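
For context, here is a minimal sketch of how multi-GPU training is typically wired up in PyTorch with DistributedDataParallel over NCCL. This is not the PR's actual code; it assumes one process per GPU launched via torch.distributed:

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def wrap_model_for_multi_gpu(model: torch.nn.Module) -> torch.nn.Module:
    # NCCL backend, matching the NCCL installation notes added to the repo
    dist.init_process_group(backend="nccl")
    local_rank = dist.get_rank() % torch.cuda.device_count()
    torch.cuda.set_device(local_rank)
    # Gradients are all-reduced across processes during backward()
    return DDP(model.cuda(), device_ids=[local_rank])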

+257 -197

0 comment

30 changed files

yalaudah

pr closed time in 15 days

push event microsoft/seismic-deeplearning

yalaudah

commit sha 33c6d33fb15fa5c1df41d8aceff640aebce3157f

bug fix

view details

push time in 15 days

push event microsoft/seismic-deeplearning

yalaudah

commit sha f8107996f83a56422aba968fd2d612c24627b5ec

fixing various paths that were broken after the merge

view details

push time in 15 days

push event microsoft/seismic-deeplearning

maxkazmsft

commit sha dd6244fd9880ac38ac134d46126a96b1b8c0f6e3

merging work from CSE team into main staging branch (#357) * Adding content to interpretation README (#171) * added sharat, weehyong to authors * adding a download script for Dutch F3 dataset * Adding script instructions for dutch f3 * Update README.md prepare scripts expect root level directory for dutch f3 dataset. (it is downloaded into $dir/data by the script) * Adding readme text for the notebooks and checking if config is correctly setup * fixing prepare script example * Adding more content to interpretation README * Update README.md * Update HRNet_Penobscot_demo_notebook.ipynb Co-authored-by: maxkazmsft <maxkaz@microsoft.com> * Updates to prepare dutchf3 (#185) * updating patch to patch_size when we are using it as an integer * modifying the range function in the prepare_dutchf3 script to get all of our data * updating path to logging.config so the script can locate it * manually reverting back log path to troubleshoot build tests * updating patch to patch_size for testing on preprocessing scripts * updating patch to patch_size where applicable in ablation.sh * reverting back changes on ablation.sh to validate build pass * update patch to patch_size in ablation.sh (#191) Co-authored-by: Sharat Chikkerur <sharat.chikkerur@gmail.com> * TestLoader's support for custom paths (#196) * Add testloader support for custom paths. * Add test * added file name workaround for Train*Loader classes * adding comments and clean up * Remove legacy code. * Remove parameters that dont exist in init() from documentation. * Add unit tests for data loaders in dutchf3 * moved unit tests Co-authored-by: maxkazmsft <maxkaz@microsoft.com> * select contiguous data splits for val and train (#200) * select contiguous data splits for test and train * changed data-dir to data_dir as arg to prepare_dutchf3.py * update script with new required parameter label_file * ignoring split_alaudah_et_al_19 as it is not updated * changed TEST to VALIDATION for clarity in the code * included job to run scripts unit test * Fix val/train split and add tests * adjust to consider the whole horz_lines * update environment - gitpython version * Segy Converter Utility (#199) * Add convert_segy utility script and related notebooks * add segy files to .gitignore * readability update * Create methods for normalizing and clipping separately. * Add comment * update file paths * cleanup tests and terminology for the normalization/clipping code * update notes to provide more context for using the script * Add tests for clipping. * Update comments * added Microsoft copyright * Update root README * Add a flag to turn on clipping in dataprep script. * Remove hard coded values and fix _filder_data method. * Fix some minor issues pointed out on comments. * Remove unused lib. * Rename notebooks to impose order; set env; move all def funtions into utils; improve comments in notebooks; and include code example to run prepare_dutchf3.py * Label missing data with 255. * Remove cell with --help command. * Add notebooks to test pipeline. 
* grammer edits * update notebook output and utils naming * fix output dir error and cleanup notebook * fix yaml indent error in notebooks_build.yml * fix merge issues and job name errors * debugging the build pipeline * combine notebook tests for segy converter since they are dependent on each other Co-authored-by: Geisa Faustino <32823639+GeisaFaustino@users.noreply.github.com> * Azureml train pipeline (#195) * initial add of azure ml pipeline * update references and dependencies * fix integration tests * remove incomplete tests * add azureml requirements.txt for dutchf3 local patch and update pipeline config * add empty __init__.py to cv_lib dutchf3 * Get train,py to run in pipeline * allow output dir in train.py * Clean up README and __init__ * only pass output if available and use input dir for output in train.py * update comment in train.py * updating azureml_requirements to only pull from /master * removing windows guidance in azureml_pipelines/README.md * adding .env.example * adding azureml config example * updating documentation in azureml_pipelines README.md * updating main README.md to refer to AML guidance documentation * updating AML README.md to include additional guidance to cancel runs * adding documentation on AzureML pipelines in the AML README.me * adding files needed section for AML training run * including hyperlink in format poiniting to additional detail on Azure Machine Learning pipeslines in AML README.md * removing the mention of VSCode in the AML README.md * fixing typo * modifying config to pipeline configuration in README.md * fixing typo in README.md * adding documentation on how to create a blob container and copy data onto it * adding documentation on blob storage guidance * adding guidance on how to get the subscription id * adding guidance to activate environment and then run the kick off train pipeline from ROOT * adding ability to pass in experiement name and different pipeline configuration to kickoff_train_pipeline.py * adding Microsoft Corporation Copyright to kickoff_train_pipeline.py * fixing format in README.md * adding trouble shooting section in README.md for connection to subscription * updating troubleshooting title * adding guidance on how to download the config.json from the Azure Portal in the README.md * adding additional guidance and information on AzureML compute targets and naming conventions * changing the configuation file example to only include the train step that is currently supported * updating config to pipeline configuration when applicable * adding link to Microsoft docs for additional information on pipeline steps * updated AML test build definitions * updated AML test build definitions * adding job to aml_build.yml * updating example config for testing * modifying the test_train_pipeline.py to have appropriate number of pipeline steps and other required modifications * updating AML_pipeline_tests in aml_build.yml to consume environment variables * updating scriptType, sciptLocation, and inlineScript in aml_build.yml * trivial commit to re-trigger broken build pipelines * fix to aml yml build to use env vars for secrets and everything else * another yml fix * another yml fix * reverting structure format of jobs for aml_build pipeline tests * updating path to test_train_pipeline.py * aml_pipeline_tests timed out, extending timeoutInMinutes from 10 to 40 * adding additional pytest * adding az login * updating variables in aml pipeline tests Co-authored-by: Anna Zietlow <annamzietlow@gmail.com> Co-authored-by: maxkazmsft 
<maxkaz@microsoft.com> * moved contrib contributions around from CSE * fixed dataloader tests - updated them to work with new code from staging branch * segyconverter notebooks and tests run and pass; updated documentation * added test job for segy converter notebooks * removed AML training pipeline from this release * fixed training model tolerance precision in the tests - wasn't working * fixed train.py build issues after the merge * addressed PR comments * fixed bug in check_performance Co-authored-by: Sharat Chikkerur <sharat.chikkerur@microsoft.com> Co-authored-by: kirasoderstrom <kirasoderstrom@gmail.com> Co-authored-by: Sharat Chikkerur <sharat.chikkerur@gmail.com> Co-authored-by: Geisa Faustino <32823639+GeisaFaustino@users.noreply.github.com> Co-authored-by: Ricardo Squassina Lee <8495707+squassina@users.noreply.github.com> Co-authored-by: Michael Zawacki <mikezawacki@hotmail.com> Co-authored-by: Anna Zietlow <annamzietlow@gmail.com>

view details

Fatemeh

commit sha d8092d034a2ff7ffc6a4a3e718c0b2c5b8884c5e

make tests simpler (#368) * removed Dutch F3 job from main_build * fixed a bug in data subset in debug mode * modified epoch numbers to pass the performance checks, checkedout check_performance from Max's branch * modified get_data_for_builds.sh to set up checkerboard data for smaller size, minor improvements on gen_checkerboard * send all the batches, disabled the performance checks for patch_deconvnet * added comment to enable tests for patch_deconvnet after debugging, renamed gen_checkerboard, added options to new arg per Max's suggestion

view details

maxkazmsft

commit sha 5318b332ed33b35e5897ee37312c9d07882d2ff9

Replace HRNet with SEResNet model in the notebook (#362) * replaced HRNet with SEResNet model in the notebook * removed debugging cell info * fixed bug where resnet_unet model wasn't loading the pre-trained version in the notebook * fixed build VM problems

view details

yalaudah

commit sha 79829d2b00fa8f2122549b41b93fc21f8ba26806

Merge branch 'staging' into multi-gpu-training

view details

push time in 15 days

push event microsoft/seismic-deeplearning

yalaudah

commit sha d04b34e279f1b3c89a3317ef1205114f6cacbef1

adding NCCL installation instructions

view details

push time in 15 days

started NVIDIA/nccl

started time in 15 days

started github/super-linter

started time in 15 days

push event microsoft/seismic-deeplearning

yalaudah

commit sha 01ed7de304f5bc1913eedb34ff5bd5ef560ab26a

remove debug flag

view details

yalaudah

commit sha 33130d59dff847e5144846d2711665a8e880741b

Fixes to train.py

view details

push time in 15 days

push event microsoft/seismic-deeplearning

yalaudah

commit sha 04d3097e4bbee3303fd8bda8b46d626eccd8326c

fixes

view details

push time in 16 days

push event microsoft/seismic-deeplearning

yalaudah

commit sha 36a32ce0645276903d4592bb3b413a389cbde1cf

minor improvement

view details

yalaudah

commit sha 9b3fa07feb1e0f696369e66a05c7b35121b6f819

minor

view details

push time in 16 days

Pull request review comment microsoft/seismic-deeplearning

Replace HRNet with SEResNet model in the notebook

 def download_pretrained_model(config):
         raise NameError(
             "Unknown dataset name. Only dutch f3 and penobscot are currently supported."
         )
-
+
     if "hrnet" in config.MODEL.NAME:
         model = "hrnet"
     elif "deconvnet" in config.MODEL.NAME:
         model = "deconvnet"
-    elif "unet" in config.MODEL.NAME:
-        model = "unet"
+    elif "resnet" in config.MODEL.NAME:

Shouldn't this be "seresnet" or "resnet_unet"? We might have other models that have a resnet backbone.
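
A quick illustration of the concern in plain Python (not repo code): a substring check on the model name matches anything with a resnet backbone, hence the suggestion to use a more specific key:

for name in ["seresnet_unet", "resnet_unet", "other_resnet_variant"]:
    print(name, "resnet" in name)  # True for all three, so the branch cannot tell them apart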

maxkazmsft

comment created time in 16 days

push event microsoft/seismic-deeplearning

yalaudah

commit sha 089fbc86d04a42e41bc7916cc21dab1d679d5949

various fixes to train.py

view details

push time in 17 days

push event microsoft/seismic-deeplearning

yalaudah

commit sha d857e1ce3734509156b79ccd7a0619b14920bc0e

improvements to the scripts

view details

push time in 17 days

Pull request review comment microsoft/seismic-deeplearning

make tests simpler

 python prepare_dutchf3.py split_train_val patch   --data_dir=${DATA_F3} --label_
 DATA_CHECKERBOARD="${DATA_CHECKERBOARD}/data"
 # repeat for checkerboard dataset
 python prepare_dutchf3.py split_train_val section --data_dir=${DATA_CHECKERBOARD} --label_file=train/train_labels.npy --output_dir=splits --split_direction=both
-python prepare_dutchf3.py split_train_val patch   --data_dir=${DATA_CHECKERBOARD} --label_file=train/train_labels.npy --output_dir=splits --stride=50 --patch_size=100 --split_direction=both
+python prepare_dutchf3.py split_train_val patch   --data_dir=${DATA_CHECKERBOARD} --label_file=train/train_labels.npy --output_dir=splits --stride=50 --patch_size=100 --split_direction=both --section_stride=100

The section stride is very big. This might lead to a really small training set. It will make training faster, but if we are testing the performance of this model, it might not perform well at all.
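
Rough arithmetic behind this concern (the volume dimensions below are assumed for illustration, not taken from the thread):

# With section_stride=100, only every 100th section survives:
n_inlines, n_crosslines = 401, 701  # assumed training-volume sizes
print(len(range(0, n_inlines, 100)))     # 5 inline sections instead of 401
print(len(range(0, n_crosslines, 100)))  # 8 crossline sections instead of 701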

fazamani

comment created time in 19 days

Pull request review comment microsoft/seismic-deeplearning

make tests simpler

[diff: removes the entire "Stage 3: Dutch F3 patch models" job (dutchf3_patch) from the build YAML, including the parallel train.py runs for patch_deconvnet, unet, seresnet_unet, and hrnet, the per-PID exit-status checks, and the corresponding test.py scoring runs]

Sorry, I might be missing something obvious, but why are these tests removed?

fazamani

comment created time in 19 days

Pull request review comment microsoft/seismic-deeplearning

merging work from CSE team into main staging branch

+git+https://github.com/microsoft/seismic-deeplearning.git@contrib#egg=cv_lib&subdirectory=cv_lib
+git+https://github.com/microsoft/seismic-deeplearning.git#egg=deepseismic-interpretation&subdirectory=interpretation
+opencv-python==4.1.2.30
+numpy>=1.17.0
+torch==1.4.0
+pytorch-ignite==0.3.0.dev20191105 # pre-release until stable available
+fire==0.2.1
+albumentations==0.4.3
+toolz==0.10.0
+segyio==1.8.8
+scipy==1.1.0
+gitpython==3.0.5
+yacs==0.1.6

Why do we need a separate requirements file for AML? And should we update the Ignite version here to 0.3.0?

maxkazmsft

comment created time in 22 days

push event microsoft/seismic-deeplearning

yalaudah

commit sha 462e8d9ae1098639b4c647083bceb726e17724c6

remove gpustat

view details

push time in 22 days

push event microsoft/seismic-deeplearning

yalaudah

commit sha 2f9f47dc2ec25511fd78c5c813b1443e20147e88

Updating scripts and moving them to contrib

view details

yalaudah

commit sha 1fd4fd2524d8f55828ea1bd9525bc4a235b5d2f1

minor updates to readme files

view details

yalaudah

commit sha 6c14cf9b26a4c6c890a7022ad65f42bd57917167

add gpustat to env

view details

push time in 22 days

started wookayin/gpustat

started time in 22 days

started lensapp/lens

started time in 22 days

push event microsoft/seismic-deeplearning

yalaudah

commit sha 4b7625189bb5024ea23f6b4653f2415a6c64c470

updaying the readmes

view details

yalaudah

commit sha de860a3cb51379ec3c3cdb9f1b9fe104cf8f23df

bug fix in build

view details

push time in 22 days

push event microsoft/seismic-deeplearning

yalaudah

commit sha eb45f1ba006dca426b39f4fa237543c22b5acf5e

updating paths in notebook and main build

view details

push time in 22 days

push event microsoft/seismic-deeplearning

Fatemeh

commit sha fdd0dafa5db81bdbc829743b0475f2334febb09f

PR to fix #342 (#347) * intermediate work for normalization * 1) normalize function runs based on global MIN and MAX 2) has a error handling for division by zero, np.finfo 3) decode_segmap normalizes the label/mask based on the n_calsses * global normalization added to test.py * increasing the threshold on timeout * trigger * revert * idk what happened * increase timeout * picking up global min and max * passing config to TrainPatchLoader to facilitate access to global min and max and other attr in low level functions, WIP * removed print statement * changed section loaders * updated test for min and max from config too * adde MIN and MAX to config * notebook modified for loaders * another dataloader in notebook * readme update * changed the default values for min max, updated the docstring for loaders, removed suppressed lines * debug

view details

yalaudah

commit sha cc985f2fc90d100ff845f7aba0e07314118f9b94

Merge branch 'staging' into multi-gpu-training

view details

push time in 23 days

PR opened microsoft/seismic-deeplearning

Multi-GPU training support

Adds support for multi-GPU training. This is a draft PR and is not ready to be merged yet.

Once merged, this PR will close #320.

+119 -59

0 comment

13 changed files

pr created time in 23 days

create branch microsoft/seismic-deeplearning

branch : multi-gpu-training

created branch time in 23 days

issue opened microsoft/seismic-deeplearning

Add Docker tests to ensure the image doesn't break while new features are added

Add a new job (in parallel with the setup job) to build and run the Docker image. We can do this for both local and distributed training, in addition to running the notebooks and testing a pretrained model.

created time in 23 days

Pull request review comment microsoft/seismic-deeplearning

PR to fix #342

 DATASET:
   NUM_CLASSES: 6
   ROOT: '/home/username/data/dutch/data'
   CLASS_WEIGHTS: [0.7151, 0.8811, 0.5156, 0.9346, 0.9683, 0.9852]
+  MIN: 0
+  MAX: 7

same as above

fazamani

comment created time in 24 days

Pull request review comment microsoft/seismic-deeplearning

PR to fix #342

 class TrainPatchLoader(PatchLoader):

     def __init__(
         self,
-        data_dir,
-        n_classes,
+        config,
         split="train",
-        stride=30,
-        patch_size=99,
+        # stride=30,
+        # patch_size=99,
         is_transform=True,
         augmentations=None,
         seismic_path=None,
         label_path=None,
         debug=False,
     ):
         super(TrainPatchLoader, self).__init__(
-            data_dir,
-            n_classes,
-            stride=stride,
-            patch_size=patch_size,
+            config,
+            # stride=stride,
+            # patch_size=patch_size,

Same comment as above: I suggest we delete code we don't need instead of having it commented out.

fazamani

comment created time in 24 days

Pull request review comment microsoft/seismic-deeplearning

PR to fix #342

 DATASET:
   NUM_CLASSES: 6
   ROOT: "/home/username/data/dutch/data"
   CLASS_WEIGHTS: [0.7151, 0.8811, 0.5156, 0.9346, 0.9683, 0.9852]
+  MIN: 0
+  MAX: 7

Are these values correct? Shouldn't they be -1 to 1 or something like that?

fazamani

comment created time in 24 days

Pull request review comment microsoft/seismic-deeplearning

PR to fix #342

 class TestPatchLoader(PatchLoader):
     """

     def __init__(
-        self, data_dir, n_classes, stride=30, patch_size=99, is_transform=True, augmentations=None, debug=False
-    ):
+        self, config, is_transform=True, augmentations=None, debug=False
+    ): #stride=30, patch_size=99

I suggest we delete code we don't need instead of having it commented out.

fazamani

comment created time in 24 days

Pull request review comment microsoft/seismic-deeplearning

PR to fix #342

 DATASET:
   NUM_CLASSES: 6
   ROOT: /home/username/data/dutch/data
   CLASS_WEIGHTS: [0.7151, 0.8811, 0.5156, 0.9346, 0.9683, 0.9852]
+  MIN: 0

same as above

fazamani

comment created time in 24 days

started microsoft/hummingbird

started time in 25 days

issue opened microsoft/seismic-deeplearning

Remove depth channel from Tensorboard visualization

Currently, models that use depth in the second (green) channel have that depth channel visualized in TensorBoard as well. For better visualization, we should only show the seismic data in TensorBoard.

This is what a depth-augmented image currently looks like in TensorBoard: [image]
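
A minimal sketch of the proposed change; the tensor layout and channel order are assumptions, not the repository's actual code:

import torch

def seismic_only_for_tensorboard(batch: torch.Tensor) -> torch.Tensor:
    # Assumes NCHW tensors with seismic amplitude in channel 0 and depth
    # encoded in a later channel; log only the seismic channel.
    return batch[:, 0:1, :, :]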

created time in a month

delete branch microsoft/seismic-deeplearning

delete branch : make-seresnet-default

delete time in a month

push event microsoft/seismic-deeplearning

yalaudah

commit sha 1b5b27d23c3568b9515c165e96d4daa81a83b792

replace hrnet with seresnet in experiments - provides stable default model (#343)

view details

push time in a month

PR merged microsoft/seismic-deeplearning

replace hrnet with seresnet in experiments - provides stable default model

Replaces HRNet with SEResNet as the default recommended model until we fix issue #304 for HRNet.

+35 -40

2 comments

17 changed files

yalaudah

pr closed time in a month

push event microsoft/seismic-deeplearning

yalaudah

commit sha 232ee04218e7d8596352273fa97e48ba182ee182

bug fix

view details

push time in a month

Pull request review comment microsoft/seismic-deeplearning

PR to fix #342

 def _patch_label_2d(
                 path_prefix = f"{outdir}/{batch_indexes[i][0]}_{batch_indexes[i][1]}"
                 model_output = model_output.detach().cpu()
                 # save image:
-                image_to_disk(np.array(batch[i, 0, :, :]), path_prefix + "_img.png")
+                image_to_disk(np.array(batch[i, 0, :, :]), path_prefix + "_img.png", float(img.min()), float(img.max()))

Okay, that makes sense. Sorry if I misunderstood.

fazamani

comment created time in a month

push event microsoft/seismic-deeplearning

yalaudah

commit sha fb42db5a1ba235e472aaf798f00436df48308f93

increase setup job timout time

view details

push time in a month

Pull request review comment microsoft/seismic-deeplearning

PR to fix #342

 def _patch_label_2d(
                 path_prefix = f"{outdir}/{batch_indexes[i][0]}_{batch_indexes[i][1]}"
                 model_output = model_output.detach().cpu()
                 # save image:
-                image_to_disk(np.array(batch[i, 0, :, :]), path_prefix + "_img.png")
+                image_to_disk(np.array(batch[i, 0, :, :]), path_prefix + "_img.png", float(img.min()), float(img.max()))

Is it possible to use the max and min values here from the config file?
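
Roughly what the reviewer is asking for, sketched below; the config keys come from the MIN/MAX additions to the YAML files elsewhere in this PR, and batch, i, and path_prefix are as in the surrounding function:

image_to_disk(
    np.array(batch[i, 0, :, :]),
    path_prefix + "_img.png",
    float(config.DATASET.MIN),  # dataset-wide values instead of per-image min/max
    float(config.DATASET.MAX),
)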

fazamani

comment created time in a month

push event microsoft/seismic-deeplearning

yalaudah

commit sha eab003f024d73ebe785871ee3227ed91ea390198

bug fix

view details

push time in a month

push event microsoft/seismic-deeplearning

yalaudah

commit sha 4986bec5d319c33baea3949ff2378ad371da0a15

bug fix to model predictions (#345)

view details

yalaudah

commit sha d772be3eb497a8c411186701cca3b911242965a2

Merge branch 'staging' into make-seresnet-default

view details

push time in a month

delete branch microsoft/seismic-deeplearning

delete branch : fix-326

delete time in a month

push event microsoft/seismic-deeplearning

yalaudah

commit sha 4986bec5d319c33baea3949ff2378ad371da0a15

bug fix to model predictions (#345)

view details

push time in a month

PR merged microsoft/seismic-deeplearning

Minor fix to address #326

Issue #326 was already partially addressed in PR #338. This just fixes a minor bug in that PR.

+1 -1

0 comment

1 changed file

yalaudah

pr closed time in a month

push event microsoft/seismic-deeplearning

yalaudah

commit sha ecf116fd187b12e5f2ca16ba9ce5774ef3295b01

hrnet is used in the notebook for now

view details

yalaudah

commit sha 8274a24b12a6930d2f427c4edffcfbb3c1805d50

removing usernames

view details

push time in a month

push event microsoft/seismic-deeplearning

yalaudah

commit sha 2e61b203c4ca242e24e4dd3cd358eb201e072360

modifictions to readme

view details

yalaudah

commit sha 8cea76b4d86a2df8407d92c7d4d62fccd4b4c9c5

Fixing inconsistencies in dutch f3 paths

view details

push time in a month

issue comment microsoft/seismic-deeplearning

debug multiple classes showing up on binary synthetic data - facilitates correct mask visualizations representative of the number of classes

I removed the other issue (binary image becomes non-binary after augmentation) since it is unrelated. It is now referenced in issue #344.

yalaudah

comment created time in a month

push event microsoft/seismic-deeplearning

maxkazmsft

commit sha 986e1ac6c0f704ad395df6d2924a5b36b624a474

fixes 318 (#339) * finished 318 * increased checkerboard test timeout

view details

Fatemeh

commit sha 55bad56dad9cca915f0014553941ea684a352346

fix 333 (#340) * added label correction to train gradient * changing the gradient data generator to take inline/crossline argument conssistent with the patchloader * changing variable name to be more descriptive Co-authored-by: maxkazmsft <maxkaz@microsoft.com>

view details

yalaudah

commit sha 869fc8946e8b55eeeb8335be15bfe89151355794

Merge branch 'staging' into fix-326

view details

push time in a month

PR opened microsoft/seismic-deeplearning

Minor fix to address #326

Issue #326 was already partially addressed in PR #338. This just fixes a minor bug in that PR.

+463 -93

0 comment

16 changed files

pr created time in a month

create branch microsoft/seismic-deeplearning

branch : fix-326

created branch time in a month

issue closed microsoft/seismic-deeplearning

Images from the binary dataset result in multiple classes after augmentation

Investigate why the binary dataset can have multiple classes around the boundary (probably from interpolation), resulting in an image with 4 classes instead of 2. This is clear in the example images below: [image] [image]

closed time in a month

yalaudah

issue comment microsoft/seismic-deeplearning

Images from the binary dataset result in multiple classes after augmentation

This is an issue caused by using linear interpolation on the images. It is a non-issue in my mind, and we should not change the code to "fix" it.
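
A small demonstration of the mechanism with generic NumPy/SciPy (not the repo's augmentation code): linear interpolation invents intermediate values on a binary mask, while nearest-neighbor interpolation does not:

import numpy as np
from scipy.ndimage import zoom

mask = np.zeros((4, 4), dtype=np.float32)
mask[:, 2:] = 1.0                           # binary mask: values {0, 1}
print(np.unique(zoom(mask, 1.5, order=1)))  # order=1 (linear): fractional values appear
print(np.unique(zoom(mask, 1.5, order=0)))  # order=0 (nearest): still exactly [0., 1.]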

yalaudah

comment created time in a month

issue opened microsoft/seismic-deeplearning

Images from the binary dataset result in multiple classes after augmentation

Investigate why the binary dataset can have multiple classes around the boundary (probably from interpolation), resulting in an image with 4 classes instead of 2. This is clear in the example images below: [image] [image]

created time in a month

issue comment microsoft/seismic-deeplearning

image normalization should be wrt the volume image

Also, this issue should address the problem where the normalized image is a constant image, resulting in dividing by zero and saving NaNs to disk. We should:

  • [ ] Only use the dataset min and max to normalize
  • [ ] Assert that the min and max are not equal; otherwise we would divide by zero (see the sketch below)
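
A minimal sketch of both checklist items; this is a hypothetical helper, not the repository's code:

import numpy as np

def normalize(img: np.ndarray, global_min: float, global_max: float) -> np.ndarray:
    # Use dataset-wide min/max, never the image's own statistics.
    assert global_max != global_min, "constant volume: normalizing would divide by zero"
    return (img - global_min) / (global_max - global_min)
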
fazamani

comment created time in a month

PR opened microsoft/seismic-deeplearning

replace hrnet with seresnet

Replaces HRNet with SEResNet as the default recommended model until we fix issue #304 for HRNet.

+18 -21

0 comment

9 changed files

pr created time in a month

create branch microsoft/seismic-deeplearning

branch : make-seresnet-default

created branch time in a month

started so-fancy/diff-so-fancy

started time in a month

Pull request review comment microsoft/seismic-deeplearning

fix 333

 def make_gradient(n_inlines, n_crosslines, n_depth, box_size, dir="inline"):
     :return: numpy array
     """

-    axis = GRADIENT_DIR.index(dir)
+    _dir = dir # for depth case
+    if dir=='inline':
+        _dir = 'crossline'
+    elif dir=='crossline':
+        _dir = 'inline'
+
+    axis = GRADIENT_DIR.index(_dir)

I know we just discussed this issue, but it might be helpful to use a more descriptive variable name instead of _dir to help users better understand its role. Maybe something like orthogonal_dir?
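
The suggested rename, sketched under the assumption that GRADIENT_DIR lists the three direction names (a hypothetical refactor of the diff above):

GRADIENT_DIR = ["inline", "crossline", "depth"]

def orthogonal_dir(direction: str) -> str:
    # Swap inline and crossline as in the fix above; depth maps to itself.
    if direction == "inline":
        return "crossline"
    if direction == "crossline":
        return "inline"
    return direction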

fazamani

comment created time in a month

delete branch microsoft/seismic-deeplearning

delete branch : 324-and-325

delete time in a month

push event microsoft/seismic-deeplearning

yalaudah

commit sha 59efd5c947a89aea25b90f4508256d107c14314f

Fixing #324 and #325 (#338) * update colormap to a non-discrete one -- fixes #324 * fix mask_to_disk to normalize by n_classes * changes to test.py * Updating data.py * bug fix * increased timeout time for main_build * retrigger build * retrigger the build * increase timeout

view details

push time in a month

push event microsoft/seismic-deeplearning

yalaudah

commit sha 9d9a4af31c7393c7de530e54f81251db4a12ec0e

increase timeout

view details

push time in a month

push event microsoft/seismic-deeplearning

yalaudah

commit sha b697bd3989e6792f5867997b7cb04c48230ef465

retrigger the build

view details

push time in a month


issue closed microsoft/seismic-deeplearning

debug test.py returns zero output - determine why deconvnet and unet model snapshots produce zero output (same class predictions)

Possible culprits:

  • Saving models in Ignite? Serialization issues?
  • Possible bug in test.py?

Update: the model is definitely trained correctly, and the issue does not seem to be related to serialization. It's definitely in the test.py code. I've run it with an ImageNet-pretrained model and it still returned zeros.

Rescoping to just UNet: make sure this works. Table HRNet for later.

The ask is to either reproduce with UNet and fix it, or label it as not reproducible and close the work item.

closed time in a month

yalaudah

push event microsoft/seismic-deeplearning

yalaudah

commit sha 2cce0cc38838ac44d26031bfbe68ab78e61b8eea

retrigger build

view details

push time in a month

push event microsoft/seismic-deeplearning

yalaudah

commit sha 540507d383586e575b3d419b70b50f6cb2d0914a

increased timeout time for main_build

view details

push time in a month

delete branch microsoft/seismic-deeplearning

delete branch : fix-324-and-325

delete time in a month


create branch microsoft/seismic-deeplearning

branch : fix-324-and-325

created branch time in a month

PR closed microsoft/seismic-deeplearning

Fixing #324 and #325

This PR closes issues #324 and #325.

  • Now, the colormap can handle an arbitrary number of classes
  • mask_to_disk() no longer normalizes the mask by its min and max; it normalizes by the absolute number of classes instead (see the sketch below). Here's a sample output with the new colormap: [image]
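
A sketch of the normalization change; the signature and colormap are assumptions, not the repository's exact implementation:

import numpy as np
from matplotlib import cm
from PIL import Image

def mask_to_disk(mask: np.ndarray, path: str, n_classes: int) -> None:
    # Normalize by the fixed class count, not the mask's own min/max, so a
    # given class always maps to the same color across saved images.
    rgba = cm.viridis(mask.astype(np.float32) / n_classes)
    Image.fromarray((rgba[..., :3] * 255).astype(np.uint8)).save(path)
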
+461 -91

0 comment

16 changed files

yalaudah

pr closed time in a month

PR opened microsoft/seismic-deeplearning

Fixing #324 and #325

This PR closes issues #324 and #325.

  • Now, the colormap can handle an arbitrary number of classes
  • mask_to_disk() no longer normalizes the mask by its min and max; it normalizes by the absolute number of classes instead. Here's a sample output with the new colormap: [image]
+461 -91

0 comment

16 changed files

pr created time in a month

push event microsoft/seismic-deeplearning

yalaudah

commit sha a6eed42693535debcffa6c00d4addbc309015812

bug fix

view details

push time in a month

started vaexio/vaex

started time in a month
