
krischer/django-plugins 29

A Simple Plugin Framework for Django

barsch/seishub.core 19

SeisHub - a seismological XML/SQL database hybrid

barsch/seishub.plugins.seismology 5

Seismology package for SeisHub.

echolite/ses3d 5

Spectral-Elements 3D

barsch/seishub.plugins.exupery 3

Exupery package for SeisHub

krischer/awesome-python 3

A curated list of awesome Python frameworks, libraries, software and resources

krischer/docker_beachball_service 3

Simple Docker service for beachball images

computational-seismology/pyadjoint 2

Python package to measure misfits and calculate adjoint sources

Pull request review comment obspy/obspy

Select traces in Stream based on an Inventory

 def select(self, network=None, station=None, location=None, channel=None,
         All other selection criteria that accept strings (network, station,
         location) may also contain Unix style wildcards (``*``, ``?``, ...).
         """
+        if inventory is not None:
+            trace_ids = []
+            start_dates = []
+            end_dates = []
+            for net in inventory.networks:
+                for sta in net.stations:
+                    for chan in sta.channels:
+                        id = '.'.join((net.code, sta.code,
+                                       chan.location_code, chan.code))
+                        trace_ids.append(id)
+                        start_dates.append(chan.start_date)
+                        end_dates.append(chan.end_date)
+            traces = []
+            for trace in self:
+                idx = 0
+                while True:
+                    try:
+                        idx = trace_ids.index(trace.id, idx)
+                        start_date = start_dates[idx]
+                        end_date = end_dates[idx]
+                        idx += 1
+                        if start_date is not None and\
+                                trace.stats.starttime < start_date:
+                            continue
+                        if end_date is not None and\
+                                trace.stats.endtime > end_date:
+                            continue
+                        traces.append(trace)
+                    except ValueError:
+                        break
+            return self.__class__(traces=traces)

I share @megies' opinion. This would be really confusing to me, and if one passes multiple criteria I would intuitively expect them all to be applied.
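For context, a minimal sketch of that expectation (hedged: the call assumes the inventory keyword proposed in this PR, combined with the existing string criteria):

from obspy import read, read_inventory

st = read()             # example data shipped with ObsPy
inv = read_inventory()  # example inventory shipped with ObsPy
# Hypothetical call assuming `inventory` is ANDed with the other criteria:
# a trace is kept only if it matches the inventory AND the channel pattern.
selected = st.select(inventory=inv, channel="*Z")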

claudiodsf

comment created time in 11 days

issue closed krischer/pyadjoint

A typo for writing adjoint source in ASDF format

https://github.com/krischer/pyadjoint/blob/e42f24acd46a981b85a57ca4039313e1886f1223/src/pyadjoint/adjoint_source.py#L190

Hi Lion, It seems that this line should be changed to specfem_adj_source[:, 0] += time_offset.
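To illustrate why (a minimal sketch with made-up values, not the actual pyadjoint code): SPECFEM-style adjoint sources are two-column arrays with the time axis in column 0 and the adjoint source values in column 1, so the time offset belongs on column 0.

import numpy as np

npts, dt, time_offset = 5, 0.5, -10.0
specfem_adj_source = np.zeros((npts, 2))
specfem_adj_source[:, 0] = np.arange(npts) * dt  # column 0: time samples
specfem_adj_source[:, 1] = 1.0                   # column 1: adjoint source values
specfem_adj_source[:, 0] += time_offset          # shift the time axis, not the data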

closed time in 12 days

ziyixi

issue comment krischer/pyadjoint

A typo for writing adjoint source in ASDF format

You are of course right. Fixed in 8a20e47bdd2796ff1e8266cf5a9d2fce8e084e9a

Thanks for reporting!

ziyixi

comment created time in 12 days

push event krischer/pyadjoint

Lion Krischer

commit sha 8a20e47bdd2796ff1e8266cf5a9d2fce8e084e9a

Fixing the offset timing.

view details

push time in 12 days

pull request comment obspy/obspy

Continuation of #2206

I just gave this another quick read and nothing immediately struck me. So I think this is as ready to merge as it's going to get.

krischer

comment created time in 12 days

issue comment obspy/obspy

Write SEGY - Extended Textual File Header

@dbpodrasky I also don't have time to look at this right now, but it might come in handy at some point in the future, so it would be great if you could upload the file in any case. It should also be pretty straightforward to add support for it to ObsPy if you want to give it a shot.

LaureLa

comment created time in 12 days

push event SeismicData/pyasdf

Travis

commit sha e72f63f106ae3946c10bae16487383f83e08d459

Travis build 181 pushed to gh-pages

view details

push time in a month

push event SeismicData/pyasdf

Zlatan Vasović

commit sha 087d9e46f8cf65a89c0db9020b6be1bf01f632ac

Use bash executable instead of sh

view details

Lion Krischer

commit sha db9fd2be16f09eb6fd4aa17995291c9d0317ebcb

Merge pull request #58 from zdroid/patch-1
Use bash executable instead of sh

view details

push time in a month

PR merged SeismicData/pyasdf

Use bash executable instead of sh

#!/bin/bash should always be used over #!/bin/sh; some reasons are provided here.

+1 -1

1 comment

1 changed file

zdroid

pr closed time in a month

pull request comment SeismicData/pyasdf

Use bash executable instead of sh

I'm not sure I agree with the general reasoning in the SO thread and the conclusion to always use bash, but in this particular case it is definitely more correct, as the script is also invoked using bash in the main Travis config file.

Thanks a bunch!

zdroid

comment created time in a month

issue closed obspy/obspy

Update libmseed

It's time to update https://github.com/iris-edu/libmseed. This would fix recent compilation errors due to off_t.

Changelog

closed time in a month

baerbock

issue comment obspy/obspy

Update libmseed

Duplicate of #2376

libmseed 3 has a fundamentally different API and also supports a new version of MiniSEED. The current libmseed we bind is such an integral part of ObsPy that we could only change it with a lot of care and testing. That being said, we'll definitely do it at some point.

baerbock

comment created time in a month

pull request comment obspy/obspy

Continuation of #2206

I applied @ThomasLecocq's fix and also moved some things to the private namespace so we don't commit to keeping them around long-term for external use. I think many of these things might become obsolete with the next-generation MiniSEED, but we'll see what happens. Please let me know if you disagree with marking these things private.

Our usual policy is that we (within reason) commit to keeping all public APIs stable but are free to internally modify private APIs (e.g. everything prefixed with an underscore).

If CI passes this is good enough to merge from my point of view.

krischer

comment created time in 2 months

pull request comment obspy/obspy

Proposed API for new Client & Indexer submodules based on IRIS time series index

Continued in #2511 as something happened with the base branch here.

chad-iris

comment created time in 2 months

PR opened obspy/obspy

Continuation of #2206 .clients.filesystem

This is a continuation of #2206. Don't really know what happened to the base branch there.

+3481 -6

0 comment

11 changed files

pr created time in 2 months

create branch obspy/obspy

branch : tsindex

created branch time in 2 months

pull request comment obspy/obspy

Proposed API for new Client & Indexer submodules based on IRIS time series index

@ThomasLecocq Thanks!

chad-iris

comment created time in 2 months

pull request comment obspy/obspy

Proposed API for new Client & Indexer submodules based on IRIS time series index

I should have access to a Windows machine tomorrow so I'll try to do it then. It's just too tedious with a CI delay time of a few hours. @ThomasLecocq Any chance you can have a look and make sure the tests pass on Windows?

chad-iris

comment created time in 2 months

delete branch obspy/obspy

delete branch : krischer/write-zero-sampling-rate-mseed

delete time in 2 months

push event obspy/obspy

Lion Krischer

commit sha 6ff7a4eef07d3a604e1acd12cff1b4be1b76e92e

Adding test for reading/writing zero-sampling rate mseed files.

view details

Lion Krischer

commit sha ba485cf9dfa8335c6d777dde60b6d022d7dc4ed5

Allow writing zero sampling rate channels.

view details

Lion Krischer

commit sha 42e3b072ad9db63d030201dd0bd361a7f9893ec5

Changelog.

view details

Lion Krischer

commit sha bb38d652aee287c7fad3cc627f910c02e28751de

Merge pull request #2509 from obspy/krischer/write-zero-sampling-rate-mseed
Allow writing of zero-sampling rate MiniSEED files

view details

push time in 2 months

PR merged obspy/obspy

Allow writing of zero-sampling rate MiniSEED files .io.mseed

Fixes #2488

Very simple fix as proposed by @megies. Also includes a test.

+25 -2

2 comments

3 changed files

krischer

pr closed time in 2 months

issue closed obspy/obspy

Cannot write zero sample rate trace data as miniSEED

Description of bug
ObsPy cannot write zero sample rate miniSEED. This is important for non-time-series data, such as LOG channels.

To reproduce the bug
1.) Download some zero sample rate LOG data from the DMC.

e.g. http://service.iris.edu/fdsnws/dataselect/1/query?network=YW&channel=LOG&starttime=2016-01-01&endtime=2016-12-31&nodata=404

2.) Read this 0 sample rate data using ObsPy

import obspy
st = obspy.read("fdsnws-dataselect_2019-10-23t18_11_00z.mseed")

3.) Try to write this data as miniSEED

st.write('fdsnws-dataselect_2019-10-23t18_11_00z.mseed', format='MSEED')

resulting in the following exception

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "//anaconda2/lib/python2.7/site-packages/obspy/core/stream.py", line 1443, in write
    write_format(self, filename, **kwargs)
  File "//anaconda2/lib/python2.7/site-packages/obspy/io/mseed/core.py", line 626, in _write_mseed
    (1.0 / trace.stats.sampling_rate * HPTMODULUS) % 100 != 0:
ZeroDivisionError: float division by zero
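The fix merged in #2509 simply guards this check; a minimal sketch of the idea (hypothetical helper name, not the exact ObsPy patch):

def needs_special_rate_handling(sampling_rate, hptmodulus=1000000):
    # A zero sampling rate is excluded up front, so LOG-style channels
    # no longer trigger a ZeroDivisionError in the modulo check.
    return sampling_rate != 0.0 and \
        (1.0 / sampling_rate * hptmodulus) % 100 != 0

print(needs_special_rate_handling(0.0))    # False instead of an exception
print(needs_special_rate_handling(100.0))  # False, 100 Hz divides evenly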

System info

  • ObsPy version: 1.1.1
  • Python version: 2.7
  • Platform: OSX
  • ObsPy was installed using Anaconda

closed time in 2 months

nick-iris

pull request comment obspy/obspy

Allow writing of zero-sampling rate MiniSEED files

Done.

krischer

comment created time in 2 months

push event obspy/obspy

Lion Krischer

commit sha 42e3b072ad9db63d030201dd0bd361a7f9893ec5

Changelog.

view details

push time in 2 months

issue comment obspy/obspy

Cannot write zero sample rate trace data as miniSEED

#2509 already exists ;-P

nick-iris

comment created time in 2 months

push event nick-iris/obspy

Tobias Megies

commit sha ef3c2a4dd6fda58caa026ddd0f0e596ed6be50e8

beachball: try to fix some plotting issues, apparently caused by missing normalization and large numbers

view details

Wayne Crawford

commit sha 98b71cb800c10aabe2258641a8e00941e9a030c6

Debugged io/nordic additions and made correspond to obspy coding style

view details

Wayne Crawford

commit sha 0be6c559c9e42bab2e7dce4806ad55d0d508c182

Update CONTRIBUTORS.txt

view details

Wayne Crawford

commit sha e25c1ae0666b8250840ec1578a2c4719bf7f9bca

Added _ellipse.to_cov() and .to_uncerts() methods. Added test_uncert_ellipse to test several cases and compare forward and reverse ellipse calculations.

view details

Tobias Megies

commit sha 39f9a9cc52954686384b2ed402ab517740fe68b3

seedlink client: add option to fetch available stations

view details

Tobias Megies

commit sha e79d316d1c942eb434e7c734edd83705446f6c5e

seedlink: adjust one test case, succeeding on any offset is a passed test

view details

Tobias Megies

commit sha b62bbf58a4f5e033d0fd6ea228722df11fb01e3f

seedlink basic client: reinit client before all requests
it's kinda hard to figure out how to best reset all options in the underlying seedlink client, so for now, just make a new client from scratch every time.

view details

Tobias Megies

commit sha da4197755c9f0116e77796ec3ea071de84f917aa

seedlink: use default port 18000 if not specified in connection address

view details

Tobias Megies

commit sha 129a7e8e0291bc2e22dc4d8cdabed707ca800324

seedlink: refactoring

view details

Tobias Megies

commit sha 36fdb90103447aa63454dfca6d6c615b5244fef1

seedlink: further work on station info request

view details

Tobias Megies

commit sha 618f304bbbd641f8f5a39a7b54f11c6a4308787a

seedlink: enable '*' wildcards in waveform requests using the new station info request

view details

Tobias Megies

commit sha e07b894d62522146abd20765ec7261133674580a

seedlink: fix info request fnmatching

view details

Tobias Megies

commit sha 519b283cc8c96071a6446027546f2a535f11378f

seedlink: add test for new functionality

view details

Tobias Megies

commit sha 82e948c47269de3140174cb3a3f119a40b53941d

changelog

view details

Tobias Megies

commit sha f85b4a0e2a9c9d2187194927272d85f6262a9a14

fix docstring

view details

Wayne Crawford

commit sha f0a08fe1dc0a50dd33798cbfdf1b840d3266ecd6

Added several tests for ellipse class, set latitude_errors, longitude_errors and depth_errors; used more robust _float_conv() and _int_conv() rather than float() and int() when reading from text files. Made nordic/ellipse.py docstrings consistent with obspy

view details

Wayne Crawford

commit sha dbfc88900b9b03491a8bd56dd9cb02e3e6ecfc68

FLAKE8 for core.py

view details

Wayne Crawford

commit sha 969f16cd66424931fd150b25351dc0392ba8a959

CIRCLECI

view details

Wayne Crawford

commit sha c5739d925d2f86507ae219cfebac75567d592c7f

CIRCLECI

view details

Wayne Crawford

commit sha 35859aaa45461495372c9161b7b9f09597886443

Completed ellipse plotting routines and added tests for them

view details

push time in 2 months

PR opened obspy/obspy

Allow writing of zero-sampling rate MiniSEED files .io.mseed

Fixes #2488

Very simple fix as proposed by @megies. Also includes a test.

+24 -2

0 comment

2 changed files

pr created time in 2 months

create branch obspy/obspy

branch : krischer/write-zero-sampling-rate-mseed

created branch time in 2 months

delete branch krischer/seismo_live

delete branch : gh-pages

delete time in 2 months

push event krischer/seismo_live

Lion Krischer

commit sha 8c78544c5322f541d44ffbd11a8c88234f138db5

Attempting to get direct links to all notebooks.

view details

Lion Krischer

commit sha fcc615aee965bc297e8d53da5692abb2ecd6fd0c

Correcting the paths.

view details

push time in 2 months

push event krischer/seismo_live_build

Lion Krischer

commit sha e4e8e59d9bf1b020e13ac91c0707eb907b05b34f

Update.

view details

push time in 2 months

push event krischer/seismo_live_build

Lion Krischer

commit sha 85f73b2c667d80a545f481fa5875aaf8dea9c19f

Updating

view details

push time in 2 months

push event krischer/seismo_live_build

Lion Krischer

commit sha 7afee6fc702d0aa5f61089a79cd6fb925ac7daa2

Updating.

view details

push time in 2 months

push event krischer/seismo_live_build

Lion Krischer

commit sha 5fb07395974eabe468ec85ab81cb96884f6bc099

Add the dockerfile to the build repository.

view details

push time in 2 months

push event krischer/seismo_live

sebnoe

commit sha 9ad75b891dbb4aff8d513827884ced097a21c4b0

Create environment.yml

view details

sebnoe

commit sha 74ecc9f0066d2bb3999ca38b4a16ef97f7b6354b

Merge pull request #32 from sebnoe/patch-1 Create environment.yml

view details

sebnoe

commit sha ba5a04f0912cbbcb1aec81e93c312ec2e2510fcb

Update environment.yml Try other environment

view details

sebnoe

commit sha e3bf4c30ba5e04c75b7f109e2f1570d547c89556

Update environment.yml Updates in pip

view details

Tobias Megies

commit sha 5680dc89809102c57736f1c63dcdac013fc4eb69

try set up using pre-built Docker image

view details

Tobias Megies

commit sha b7f4f1afd709b5f127ccafac110569f37cd3cd3c

Dockerfile: need to specify hash tag of image

view details

Tobias Megies

commit sha 6d76c67e715ceb2bea0fa5ce5ea0271f6d049beb

Dockerfile: update notebooks via git

view details

Tobias Megies

commit sha 4be893cf9f386c0b0d08b9eb827f83e6a1d20922

Docker file: try to work around proj conda package/ing issues

view details

Tobias Megies

commit sha e616c3bee0c5a6d915f9d5f1a1fa2b1e9975fb5d

binder/docker setup: add instructions for building the base image

view details

Tobias Megies

commit sha 6a233dbb96e741d5e7ac8a57eb58e21a69d692e9

dockerfile: add comment on base image target

view details

Tobias Megies

commit sha 5991f215569a3430ee37d54c62fd2c82e5845b9a

docker setup: move some final touches to postBuild
make life easier for binder and avoid these steps potentially being cached

view details

Tobias Megies

commit sha 41f2e2ffb4d97c4f47c0e5811e67d96e3c9fc6a9

docker setup: move the env variable setup to "start"

view details

Tobias Megies

commit sha 1745c200a5d1a25d2071ac53354a3c720633ffa3

docker setup: looks like all setup code should go into "start" script

view details

Tobias Megies

commit sha 719f39c5fb583117dcebd73c74e53ad9d91c7caa

docker setup: rename start, has to be docker entrypoint actually

view details

Tobias Megies

commit sha 4c988dbb500b440e97168b64c264683656be2a97

main dockerfile: set up entrypoint to do final touches after docker build

view details

Tobias Megies

commit sha 5acfa7c99d24c0d5e83d645a0f4f44cd33bcaafa

docker: try to fix startup / entrypoint setup

view details

Tobias Megies

commit sha 08208a48ed0422e3d84af4cd51f6e1388cfb6adf

docker setup: remove old Makefile of tmpnbserver

view details

Tobias Megies

commit sha dc62cc3f938c007fcf9fcc108dccf99af6f7c966

docker: try to fix startup logic

view details

Tobias Megies

commit sha e37772e58de779eade65ab1b58bec64dfbcf2f9f

docker: try to fix startup

view details

Tobias Megies

commit sha eaa3c1f71810b4b6824b40272a6ae0b65ff91dc3

docker: try to fix startup

view details

push time in 2 months

PR merged krischer/seismo_live

Binder migration: New docker based setup with self-contained and stable pre-built base image on docker hub

What this does..

  • adds instructions and scripts to build a fully self-contained docker base image for binder
    • once everything is in place, base image should only need updating on major changes to seismo-live (e.g. when some notebooks need newer dependency versions, or new obspy released, etc...)
    • to include specific data(sets) for use in notebooks, the dockerfile should be further modified and the base image rebuilt
  • modifies the root directory Dockerfile, which will fully rely on the above pre-built docker base image. only minimal steps should be in this file (like updating the notebooks in the base image)

The state of the PR can be tested on mybinder.org at this URL:

https://mybinder.org/v2/gh/krischer/seismo_live/binder_dockerfile

The only remaining issue is if/how to update notebooks when starting the docker base image. See the explanations at the bottom of README.md. Currently, the state of the notebooks will not be automatically updated when starting the docker image on binder. For debugging/testing, it is possible to $ git pull after starting a terminal in the jupyter session on binder. To update the notebooks, building a new base image seems easiest right now, although in principle the pre-notebook-start hooks (explained in README.md) should be able to do exactly that.

n.b. the PR commit history is rather messy due to a lot of CI debugging (without logs being available), but it might be useful in the future to retrace different approaches that were explored, so it might be good to keep it.

+663 -35

1 comment

5 changed files

megies

pr closed time in 2 months

push event krischer/seismo_live

Lion Krischer

commit sha b0b1a05370b9badbc470312051ac4abe1cad10b8

Converting everything to jupytext files.

view details

Lion Krischer

commit sha 1368412595677abad50629ab556fa9a15208086d

Beginnings of conversions script.

view details

Lion Krischer

commit sha 58c331ed87f1326d811aae6a028d3fd0ddecaf32

Renaming all solution notebooks consistently to '.._solution.py'

view details

Lion Krischer

commit sha d1a283ebf45c5715fc016a62f299b12e72e074b1

Remove all no solution files.

view details

Lion Krischer

commit sha c2a3417a087db0c1723fae8caa9a56117f9fb64c

Removing all __future__ imports.

view details

Lion Krischer

commit sha f2d980b40d31b197cef29b77d960779d17bba745

Converting also to HTML.

view details

Lion Krischer

commit sha 3bd6f3445478a4a47b1c8312693a39863804252b

Strip solutions.

view details

Lion Krischer

commit sha 332ae3598daf08842874db8569a19ac9b2ef1a0f

Updating script.

view details

Lion Krischer

commit sha f69d5cff38f0acdf3e1c021261b50e6255c40903

Adding example azure pipelines.

view details

Lion Krischer

commit sha d9df5a21982fe95559c875ec8fea697647d91570

Modifying azure pipelines.

view details

Lion Krischer

commit sha 5654f383219f869cdbe862e6d857905ab1f8f90c

Setting python version in azure pipelines.

view details

Lion Krischer

commit sha cb73f561248010ef2facd5e28b3314425ee1d049

Install jupyter client.

view details

Lion Krischer

commit sha d23a8f2f656c259b44f824822b58cea3f2554363

Try to install jupyter instead of jupyter_client.

view details

Lion Krischer

commit sha b22df2e059a49630ad82c110fc59817080cbf8f6

installing obspy.

view details

Lion Krischer

commit sha 227e6d97c8c5207dafc70c7d62fcc9f398687e0d

Needs to build obspy from source.

view details

sebnoe

commit sha 97da3135e9fbb4f944a2f6cbe89e3500e2f77a7c

Solution tags (#35)
* added tags for solution-cells
* Add solution tags
* Adding solution tags
* Add files via upload
* Add files via upload
* Add files via upload
* Add files via upload
* Add files via upload
* Add files via upload
* Add files via upload
* Add files via upload
* Add files via upload
* Removing all __future__ imports.
* Removing all __future__ imports.
* Removing all __future__ imports.
* Removing all __future__ imports.
* Removing all __future__ imports.
* Removing all __future__ imports.
* Removing all __future__ imports.
* Removing all __future__ imports.
* Delete 06_FDSN-with_solutions.py
* Delete 06_FDSN.py
* Removing all __future__ imports.
* Removing all __future__ imports.
* Removing all __future__ imports.
* Removing all __future__ imports.
* Removing all __future__ imports.
* Removing all __future__ imports.
* Removing testing code block.
* Delete fd_ac3d_homogeneous_solution.py
* Delete fd_ac2d_homogeneous_solution.py
* Delete fd_ac2d_heterogeneous_solution.py
* Delete fd_ac1d_with_solutions.py
* Delete ac1d_optimal_operator_solution.py
* Delete fd_ac1d.py
* Rename fd_ac3d_homogeneous_solutions.py to fd_ac3d_homogeneous_solution.py
* Rename fd_ac2d_homogeneous_solutions.py to fd_ac2d_homogeneous_solution.py
* Rename fd_ac2d_heterogeneous_solutions.py to fd_ac2d_heterogeneous_solution.py
* Rename fd_ac1d_solutions.py to fd_ac1d_solution.py
* Rename ac1d_optimal_operator_solutions.py to ac1d_optimal_operator_solution.py
* Strip unfinished code from solution-notebook
* Create environment.yml
* Create requirements.txt
* Delete Dockerfile
* Delete requirements.txt
* Delete environment.yml
* Create Dockerfile
* Update Dockerfile

view details

Lion Krischer

commit sha 2e2fcbd28382a77fb2774e0b8eaeb984de00e903

Fixing a few obvious things.

view details

Lion Krischer

commit sha 8c0bb4db221525919af7fd559295bf0cf9c97709

Ability to continue the conversion if it previously stopped or failed.

view details

Lion Krischer

commit sha 78ceb19f6661c9722f42335af946f18b52781a3e

Making sure everything can be executed in the crash course notebook.

view details

sebnoe

commit sha fb09b8f165bdd9c12ce787b2d45251d4ee94e37f

new notebooks in signal processing

view details

push time in 2 months

PR merged krischer/seismo_live

Convert everything to Jupytext

Also the beginnings of a new workflow how to do things.

+0 -0

4 comments

0 changed file

krischer

pr closed time in 2 months

push event obspy/docs

Lion Krischer

commit sha 8819227c888fc36af2c4720e3ea217c49f0551b2

Adding poster source files - please don't judge me

view details

push time in 2 months

push event krischer/seismo_live

Lion Krischer

commit sha 63c306929e3b9451b16439ad02f025a1b3424e00

Changing sensor.

view details

Lion Krischer

commit sha 24845425ffc103f15816550d16ba8978cee13751

Fixing a few more notebooks.

view details

Lion Krischer

commit sha 85f0741f82915646462bcc671c3d08f18b56d2c6

A lot more notebooks work now.

view details

Lion Krischer

commit sha 7f4bd741784db1a0b5da3af419132641f2ca35b7

Getting a bunch more notebooks to pass.

view details

Lion Krischer

commit sha f6607eae9b4a6f4a948763780440786f38a905e6

Towards statically building the new website.

view details

Lion Krischer

commit sha d1e2d96b9e70b1f638caeabb6c4b31fecf5e6955

Building a proper website out of it.

view details

Lion Krischer

commit sha cfa0082e1c436a7de61232b3e1459f6cab1a8f3f

Embedding pretty navbar into each statically rendered website.

view details

Lion Krischer

commit sha 739baebf8b047a8dc9b8c6cb5f55cb1fb21cc80e

All notebooks now do something.

view details

Lion Krischer

commit sha 797cc2e411ffc15c857883c9bf80d2e2b23b4a52

Make sure the whole box is a link.

view details

Lion Krischer

commit sha fc14c094757721093872845bd2cd8c87d31cb723

Making some links relative.

view details

Lion Krischer

commit sha 0510664d4d8d2259e675536c911df5d9c0310e99

Also use relative imports for the style sheets in the wrapper.

view details

Lion Krischer

commit sha 6583b804e62b30987655a7166f896f2a40bbb45b

Code formatting.

view details

Lion Krischer

commit sha a3f7b0a4902e447876358d4092d608bb2f73d4f5

Documentation.

view details

push time in 2 months

push event krischer/seismo_live_build

Lion Krischer

commit sha f6b2a3f42770a6626d9ed75f715e6abd9804c195

Use relative links also for the style sheet.

view details

push time in 2 months

push event krischer/seismo_live_build

Lion Krischer

commit sha fb36e9bd798338b19debf9d273458d6713ee200f

Updating.

view details

push time in 2 months

push event krischer/seismo_live_build

Lion Krischer

commit sha 69359f5645ec1f645c4409e2c74c19defd56c994

Adding nojekyll file.

view details

push time in 2 months

create branch krischer/seismo_live_build

branch : master

created branch time in 2 months

created repository krischer/seismo_live_build

Actual repository: https://github.com/krischer/seismo_live

created time in 2 months

pull request comment krischer/seismo_live

Convert everything to Jupytext

@sebnoe It would also be helpful if you branched off here and worked on a different branch, and we merge once everything is ready. Multiple people working on a single branch usually results in conflicts that have to be resolved all the time. If we work on different branches we only have to do it once.

krischer

comment created time in 3 months

issue comment SeismicData/SEIS-PROV

KUDOS and the question(s)

That sounds pretty nice and useful indeed! And I totally agree that container based workflows are the most useful and practical choice to get things done. And one could certainly argue that a VCS introduces a graph structure into your system ;-)

SEIS-PROV tried to establish something more akin to a system-independent "light" provenance. I've actually written a few sentences about its original goals at the end of this page: http://seismicdata.github.io/SEIS-PROV/motivation.html#goal-of-seis-prov

yarikoptic

comment created time in 3 months

issue closed obspy/obspy

Bug in remove_response? (not including sensor Dip?)

Hi all,

I think I might have found a bug in the remove_response function - or I am doing something stupid. Basically I have the feeling it's not taking into account the Dip of the respective channel (or I don't know where in stationxml the Dip is defined). Here's a quick example that shows the behaviour:

from obspy.clients.fdsn import Client
from obspy import UTCDateTime
import matplotlib.pyplot as plt

fdsn = Client('ODC')
t1 = UTCDateTime('2018-05-09T10:48:00')
t2 = UTCDateTime('2018-05-09T10:55:00')

data = fdsn.get_waveforms(network='OE', station='CONA', location='', channel='HHZ', starttime=t1, endtime=t2)
inv = fdsn.get_stations(network='OE', station='CONA', location='', channel='HHZ', starttime=t1, endtime=t2, level='response')
inv_minus = fdsn.get_stations(network='OE', station='CONA', location='', channel='HHZ', starttime=t1, endtime=t2, level='response')

inv[0][0][0].dip = 90.0
inv_minus[0][0][0].dip = -90.0

data_deconv = data.copy()
data_deconv_minus = data.copy()

data_deconv.remove_response(inventory=inv, output='VEL')
data_deconv_minus.remove_response(inventory=inv_minus, output='VEL')

plt.plot(data_deconv[0].data, label='90')
plt.plot(data_deconv_minus[0].data, label='-90')
plt.legend()
plt.show()

If the data is plotted, it is identical, even though it should have opposite sign ...

Any thoughts? Is the dip of the channel specified elsewhere in the stationXML?

Best regards, Florian

closed time in 3 months

flofux

issue comment obspy/obspy

Bug in remove_response? (not including sensor Dip?)

I guess that depends on the point of view ;-) It would be interesting if remove_response would also rotate, but many people would probably not expect that.

flofux

comment created time in 3 months

push event conda-forge/mtspec-feedstock

regro-cf-autotick-bot

commit sha 6148561fc8c98cef11109963af0b9bbc489051ea

bump build number

view details

regro-cf-autotick-bot

commit sha 24134f6e2c10711efa323500f609227e0181dd54

MNT: Re-rendered with conda-build 3.18.11, conda-smithy 3.6.1, and conda-forge-pinning 2019.11.01

view details

Lion Krischer

commit sha c6a2580d81e5a4f6e4f3483b9cd6b9427b202ee8

Merge pull request #11 from regro-cf-autotick-bot/rebuild-python3801
Rebuild for python38

view details

push time in 3 months

PR merged conda-forge/mtspec-feedstock

Rebuild for python38

This PR has been triggered in an effort to update python38.

Notes and instructions for merging this PR:

  1. Please merge the PR only after the tests have passed.
  2. Feel free to push to the bot's branch to update this PR if needed. Please note that if you close this PR we presume that the feedstock has been rebuilt, so if you are going to perform the rebuild yourself don't close this PR until your rebuild has been merged.

This package has the following downstream children:

And potentially more. If this PR was opened in error or needs to be updated please add the bot-rerun label to this PR. The bot will close this PR and schedule another one.

This PR was created by the cf-regro-autotick-bot. The cf-regro-autotick-bot is a service to automatically track the dependency graph, migrate packages, and propose package version updates for conda-forge. If you would like a local version of this bot, you might consider using rever. Rever is a tool for automating software releases and forms the backbone of the bot's conda-forge PRing capability. Rever is both conda (conda install -c conda-forge rever) and pip (pip install rever) installable. Finally, feel free to drop us a line if there are any issues! This PR was generated by https://circleci.com/gh/regro/circle_worker/11875, please use this URL for debugging

+123 -40

1 comment

12 changed files

regro-cf-autotick-bot

pr closed time in 3 months

issue comment obspy/obspy

Bug in remove_response? (not including sensor Dip?)

Hi Florian,

the remove_response() function only deconvolves the instrument response; it does not do anything else. To rotate you'll have to use .rotate() with the ->ZNE method: https://docs.obspy.org/packages/autogen/obspy.core.stream.Stream.rotate.html

This requires all three components to be available, but otherwise it could only handle trivial cases.
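For reference, a sketch of the two separate steps (assuming all three components and a response-level inventory are downloaded, as in the original example):

from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client('ODC')
t1 = UTCDateTime('2018-05-09T10:48:00')
t2 = UTCDateTime('2018-05-09T10:55:00')
st = client.get_waveforms('OE', 'CONA', '', 'HH?', t1, t2)
inv = client.get_stations(network='OE', station='CONA', channel='HH?',
                          starttime=t1, endtime=t2, level='response')
st.remove_response(inventory=inv, output='VEL')  # deconvolution only
st.rotate('->ZNE', inventory=inv)  # azimuth/dip are applied in this step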

flofux

comment created time in 3 months

issue comment SeismicData/SEIS-PROV

KUDOS and the question(s)

Hi Yarik,

thanks for the nice words!

I'm not entirely sure I'd consider SEIS-PROV a success - people are always very interested in it, but as you've noticed there is little actual activity going on. That being said, ASDF is being used a fair bunch and continues to grow, so maybe SEIS-PROV will be rejuvenated at one point in time.

PROV (like the rest of semantic web markup) is not really easily digestible by humans with all the random IDs etc. json-ld and other serializations made things better but not really easy. That is why often there is a seductive power of "let's just come up with some schema which could be compatible with PROV, i.e. that we could convert to PROV representation if needed; or which would just be more useful to humans instead of computers". Do you still feel that "native" PROV in ASDF was the way to go?

Yes - I still think that "native" PROV is the way to go. Any non-trivial provenance description will become a directed graph. As soon as there is a graph there must be unique ids of some form, and at that point it can really only be understood by humans by looking at a graphical representation. At that point I no longer care too much about the data format representation, and using an existing standard is IMHO always the right choice as it comes with libraries and tools to do these visualizations (amongst other things).

And PROV-N (https://www.w3.org/TR/2013/REC-prov-n-20130430/) is easy enough to read and I cannot envision something simpler that could still represent a DAG.
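As a small illustration (a sketch using the Python prov package, not a SEIS-PROV document): even a two-node graph already needs ids, yet the PROV-N serialization stays readable.

from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace('ex', 'http://example.org/')
doc.entity('ex:waveform_0001')          # e.g. a processed trace
doc.activity('ex:lowpass_filter_0001')  # the processing step
doc.wasGeneratedBy('ex:waveform_0001', 'ex:lowpass_filter_0001')
print(doc.get_provn())                  # human-readable PROV-N output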

Also this is very easy to implement while still being powerful. You might have noticed that ASDF embeds all kinds of XML formats so we've chosen the PROV XML representation, but embedding others would also work. We chose to directly embed the encoded byte representation of XML files - this might seem ugly and cumbersome but it has the big advantage that one no longer has to deal with text encodings and other nasty issues as this is all handled by the underlying (in this case XML) parsing engine. And it's actually more efficient compared to storing it in deeply nested HDF5 groups and attributes which are really slow to query.
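A sketch of that byte-embedding idea (hypothetical dataset path; pyasdf's actual layout may differ):

import h5py
import numpy as np

xml_bytes = b'<?xml version="1.0" encoding="UTF-8"?><document/>'

# Store the encoded bytes verbatim; decoding is left to the XML parser.
with h5py.File('example.h5', 'w') as f:
    f.create_dataset('Provenance/example',
                     data=np.frombuffer(xml_bytes, dtype=np.uint8))

with h5py.File('example.h5', 'r') as f:
    assert f['Provenance/example'][()].tobytes() == xml_bytes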

  • do higher level user tools in the field use PROV information, e.g. for visualization or querying by mere mortals for pragmatic benefit (e.g. just listing types of filtering done on the data with parameters used etc)?

  • did you see or could refer to specific pragmatic (goal driven, not just demos on "what could be done") use cases / studies / benefits from having PROV in ASDF?

I'll answer these two together. In our field, like I imagine in others, provenance and reproducibility are things everyone likes talking about but few people actually tackle it in generally useful ways.

Thus no to both. It has not happened yet and I feel like it would only happen if all provenance acquisition and storage would happen fully automatically without ANY additional work and friction for scientists and users.

The closest I got to this was to implement automatic provenance tracking in ObsPy (https://github.com/obspy/obspy/wiki), a standard tool in our field. It is not merged because a few edge cases need some more work, but maybe I'll finish it at some point.

It is probably possible to implement something similar for your use case. Some inspiration: https://github.com/krischer/obspy/blob/obspy_provenance/obspy/core/provenance.py and a decorator for all the processing methods that actually tracks the provenance: https://github.com/krischer/obspy/blob/obspy_provenance/obspy/core/trace.py#L224
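In case it helps, the core of such a decorator can be sketched in a few lines (hypothetical names, much simpler than the linked branch):

import functools

def track_provenance(fn):
    @functools.wraps(fn)
    def wrapper(self, *args, **kwargs):
        result = fn(self, *args, **kwargs)
        # Record the step only after it succeeded.
        self.provenance = getattr(self, 'provenance', [])
        self.provenance.append(
            {'step': fn.__name__, 'args': args, 'kwargs': kwargs})
        return result
    return wrapper

class Trace:
    @track_provenance
    def filter(self, type, **options):
        pass  # the actual filtering would happen here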

Let me know if this does not help or you need more information. I'm always happy to answer questions!

yarikoptic

comment created time in 3 months

push event krischer/seismo_live

Lion Krischer

commit sha 2e2fcbd28382a77fb2774e0b8eaeb984de00e903

Fixing a few obvious things.

view details

Lion Krischer

commit sha 8c0bb4db221525919af7fd559295bf0cf9c97709

Ability to continue the conversion if it previously stopped or failed.

view details

Lion Krischer

commit sha 78ceb19f6661c9722f42335af946f18b52781a3e

Making sure everything can be executed in the crash course notebook.

view details

push time in 3 months

push event conda-forge/instaseis-feedstock

regro-cf-autotick-bot

commit sha 6075b3c360b211ca25173a95777443c7c2837700

bump build number

view details

regro-cf-autotick-bot

commit sha f482f86e69ed2349f1e7a92b7a6552adebfcae30

MNT: Re-rendered with conda-smithy 3.1.12 and pinning 2018.11.24

view details

conda-forge-admin

commit sha f622bd2d65a929aef45d8aed8b4c28382ef4cb0c

MNT: Re-rendered with conda-build 3.17.8, conda-smithy 3.2.12, and conda-forge-pinning 2019.01.29

view details

conda-forge-admin

commit sha a2ee84b3951f964876cd7f0b88a3e087d7395a2c

MNT: Re-rendered with conda-build 3.18.10, conda-smithy 3.6.0, and conda-forge-pinning 2019.10.11

view details

Marius van Niekerk

commit sha 40ec5d0bb55b60193d7bac014d6b01837525792d

Update meta.yaml

view details

Lion Krischer

commit sha 07cdcd7fd626bbece17f08016e9c8159294e58de

Merge pull request #14 from regro-cf-autotick-bot/rebuild
Rebuild for Python 3.7, GCC 7, R 3.5.1, openBLAS 0.3.2

view details

push time in 3 months

PR merged conda-forge/instaseis-feedstock

Rebuild for Python 3.7, GCC 7, R 3.5.1, openBLAS 0.3.2

It is likely this feedstock needs to be rebuilt. Notes and instructions for merging this PR:

  1. Please merge the PR only after the tests have passed.
  2. Feel free to push to the bot's branch to update this PR if needed.

Please note that if you close this PR we presume that the feedstock has been rebuilt, so if you are going to perform the rebuild yourself don't close this PR until your rebuild has been merged.

This package has the following downstream children:

And potentially more.

If this PR was opened in error or needs to be updated please add the bot-rerun label to this PR. The bot will close this PR and schedule another one.

This PR was created by the cf-regro-autotick-bot. The cf-regro-autotick-bot is a service to automatically track the dependency graph, migrate packages, and propose package version updates for conda-forge. If you would like a local version of this bot, you might consider using rever. Rever is a tool for automating software releases and forms the backbone of the bot's conda-forge PRing capability. Rever is both conda (conda install -c conda-forge rever) and pip (pip install rever) installable. Finally, feel free to drop us a line if there are any issues!

+511 -318

7 comments

28 changed files

regro-cf-autotick-bot

pr closed time in 3 months

push event krischer/seismo_live

sebnoe

commit sha 97da3135e9fbb4f944a2f6cbe89e3500e2f77a7c

Solution tags (#35)
* added tags for solution-cells
* Add solution tags
* Adding solution tags
* Add files via upload
* Add files via upload
* Add files via upload
* Add files via upload
* Add files via upload
* Add files via upload
* Add files via upload
* Add files via upload
* Add files via upload
* Removing all __future__ imports.
* Removing all __future__ imports.
* Removing all __future__ imports.
* Removing all __future__ imports.
* Removing all __future__ imports.
* Removing all __future__ imports.
* Removing all __future__ imports.
* Removing all __future__ imports.
* Delete 06_FDSN-with_solutions.py
* Delete 06_FDSN.py
* Removing all __future__ imports.
* Removing all __future__ imports.
* Removing all __future__ imports.
* Removing all __future__ imports.
* Removing all __future__ imports.
* Removing all __future__ imports.
* Removing testing code block.
* Delete fd_ac3d_homogeneous_solution.py
* Delete fd_ac2d_homogeneous_solution.py
* Delete fd_ac2d_heterogeneous_solution.py
* Delete fd_ac1d_with_solutions.py
* Delete ac1d_optimal_operator_solution.py
* Delete fd_ac1d.py
* Rename fd_ac3d_homogeneous_solutions.py to fd_ac3d_homogeneous_solution.py
* Rename fd_ac2d_homogeneous_solutions.py to fd_ac2d_homogeneous_solution.py
* Rename fd_ac2d_heterogeneous_solutions.py to fd_ac2d_heterogeneous_solution.py
* Rename fd_ac1d_solutions.py to fd_ac1d_solution.py
* Rename ac1d_optimal_operator_solutions.py to ac1d_optimal_operator_solution.py
* Strip unfinished code from solution-notebook
* Create environment.yml
* Create requirements.txt
* Delete Dockerfile
* Delete requirements.txt
* Delete environment.yml
* Create Dockerfile
* Update Dockerfile

view details

push time in 3 months

PR merged krischer/seismo_live

Solution tags

Added solution tags wherever necessary in the solution notebooks. It should now be possible to create the no-solution notebooks.
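A sketch of how the no-solution notebooks could then be generated (assuming nbformat and a 'solution' cell tag; the repository's actual conversion script may differ):

import nbformat

nb = nbformat.read('notebook_with_solutions.ipynb', as_version=4)
# Drop every cell carrying the 'solution' tag in its metadata.
nb.cells = [c for c in nb.cells
            if 'solution' not in c.get('metadata', {}).get('tags', [])]
nbformat.write(nb, 'notebook_no_solutions.ipynb')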

+10344 -9930

0 comment

38 changed files

sebnoe

pr closed time in 3 months

issue comment SeismicData/pyasdf

Slow performance when adding gappy data to ASDF

I think this PR (if extended a bit) would be a good solution for this particular problem: https://github.com/SeismicData/pyasdf/pull/49

chad-iris

comment created time in 3 months

push event obspy/obspy

trichter

commit sha 935c6a5265d0d4060c34984089bcbc028e8372ad

add Stream.stack method

view details

trichter

commit sha 6e5f1b65e38b763e692b601a2e6a7bf732e7a166

change default type='normal' to 'linear'

view details

trichter

commit sha 25bcf6d1aa6e6530d15e49b99310a01a0fe0fd9b

write stack_count to stats object, add documentation about metadata, pep8

view details

trichter

commit sha 7889580eaab47d6ad8216f0797a5a3ec01a3aa31

make division py2 compatible

view details

trichter

commit sha e3f7564b4fbb1adfc0878ad57d73a7e4d83ca235

cosmetics

view details

trichter

commit sha 64e565a8d73952d80df9273c0f0538992a4a0d7a

add bib file for Schimmel1997

view details

trichter

commit sha 5fc68cbc7653338d3c79f65626f2e7e4ac0bd8e6

add reference in doc string

view details

trichter

commit sha 1c9330b8a6c95b8c2d0d7e0e551c33e0d7327313

add tests, fix type check

view details

trichter

commit sha e14546e8ce98315a06c7c99033f48f4779f5a047

add changelog

view details

trichter

commit sha 9dd73eb40d8fa4d1d33da37a0f984879ff316d3e

fix doctest, fix tests for py3.5

view details

trichter

commit sha 834ff5fe4488b84ca763052efd02da2cd00d8feb

final touches to test_stack

view details

trichter

commit sha 9924f9b30fd4328c0f13515e0b37703aebf8694a

implement review suggestions

view details

trichter

commit sha 2d0e501b3e4e2ea9c60d100043d3f351b8f1211b

fix metadata test

view details

trichter

commit sha ffb0fc60bc25190c13ead2f6eb7c82a386298138

fix typo in doc string

view details

trichter

commit sha 1c7d2c144c428300204ef2f6b3b62b46915dc1e4

stack: implement krischer review
* use stats.stack as AttribDict instead of adding fields individually
* change names of arguments
* document type on extra line
* operates in-place now
* add time_tol parameter
* raise on different sampling_rate

view details

trichter

commit sha 1325c6724a593e8e641c6cad62e5579f96458641

update doc

view details

trichter

commit sha 237225d0394857f385e610ab183a9276e7d728b7

fix py2

view details

Lion Krischer

commit sha 79a83855c8c1498d9370c2a4e063ace256f2be27

Merge pull request #2440 from obspy/stack
add Stream.stack method

view details

push time in 4 months

delete branch obspy/obspy

delete branch : stack

delete time in 4 months

PR merged obspy/obspy

add Stream.stack method .core

What does this PR do?

I added a stack method to the Stream object. Any discussion about the implementation is welcome before adding some tests.

Why was it initiated? Any relevant Issues?

See #1741, and maybe #841, #1002

PR Checklist

  • [x] Correct base branch selected? master for new features, maintenance_... for bug fixes
  • [x] This PR is not directly related to an existing issue (which has no PR yet).
  • [x] If the PR is making changes to documentation, docs pages can be built automatically. Just remove the space in the following string after the + sign: + DOCS
  • [x] If any network modules should be tested for the PR, add them as a comma separated list (e.g. clients.fdsn,clients.arclink) after the colon in the following magic string: "+TESTS:" (you can also add "ALL" to just simply run all tests across all modules)
  • [x] All tests still pass.
  • [x] Any new features or fixed regressions are covered via new tests.
  • [x] Any new or changed features are fully documented.
  • [x] Significant changes have been added to CHANGELOG.txt .
  • [x] First time contributors have added your name to CONTRIBUTORS.txt .
+199 -0

8 comments

4 changed files

trichter

pr closed time in 4 months

pull request comment obspy/obspy

add Stream.stack method

I think this looks good now. Thanks a lot!

trichter

comment created time in 4 months

pull request comment obspy/obspy

Fix build_taup_model for model with no discontinuity

I looked at this for quite a bit but cannot figure it out unfortunately :-( Can anyone test if such a model would work with the Java TauP package?

not522

comment created time in 4 months

pull request comment obspy/obspy

Add support for INGV DMX format

@ThomasLecocq Any chance to get this done within the next few days?

ThomasLecocq

comment created time in 4 months

pull request comment obspy/obspy

Proposed API for new Client & Indexer submodules based on IRIS time series index

I'm really sorry for taking so long on this, especially as it is really well done. Thanks a lot!

I rebased on the latest master and made a few small changes, mainly fixing some upcoming Python 3.8 deprecations. One thing that might still be nice is to include mseedindex in one of the CI runners, but we can do this at a later point.

I'll merge this once the CI passes.

chad-iris

comment created time in 4 months

push event nick-iris/obspy

Tobias Megies

commit sha 61eab40d5769bce7629c9fec1d5109f2e2c97b96

deb/docker packaging: add ubuntu 18.10 cosmic

view details

Tobias Megies

commit sha da35f091431f34ab042db4d9ebc2e9aab151449b

deb/docker: remove one EOL ubuntu

view details

Derrick Chambers

commit sha 7096d5b4c64f231bee3e401c5095afce4e393ca5

add nll tests

view details

Derrick Chambers

commit sha 804f0f2727e94cf949697280f9d89b10ff2dba3b

add logic for more flexible time parsing

view details

derrick

commit sha 92843fe1c59978d41d94d89ff991a70a82aeb6b1

add issue 2266 tests

view details

derrick

commit sha 7833c50ceaa738d07f52024d9481baa109f7e46f

fix #2266

view details

Tobias Megies

commit sha 73d2496aff79e8099709d37ea8be793cb53a9f74

start work on new io.focmec module

view details

Tobias Megies

commit sha 48d373745abb1f67f60c9151eec137c7807538d6

io.focmec: basic reader structure

view details

Tobias Megies

commit sha 2fa21341492d940e3535528a650eab9161a8f853

io.focmec: implemented parsing common header

view details

Tobias Megies

commit sha 4ddb0ae0de1658ef6a158542801dca945f8193f9

io.focmec: basic reading of both lst and out files works
so far only reads focal mechanisms with strike/dip/rake, no additional info or comments

view details

Tobias Megies

commit sha 5b198dc4d1755c7bd690691e71db5a24f80e6816

io.focmec: add polarity count + misfit to focmecs

view details

Tobias Megies

commit sha bf2df8f3f6f9b2538e4a28608a94d15c6351992a

io.focmec: refactoring

view details

Tobias Megies

commit sha f0eda113e1fce6b3ef5338fbbaf270b2ca3f4bdb

io.focmec: refactoring

view details

Tobias Megies

commit sha 865ab9e7f201349c003b7e46cf0cddb31bf56609

io.focmec: add full info text from original file as comment for later reuse

view details

Tobias Megies

commit sha 1edd4fd7b7d1e2b009c2251fd60181513e10b138

io.focmec: add full raw original file content as comments on event / individual focal mechanisms

view details

Tobias Megies

commit sha d250f9d23ae5637b05b4a9a2a70806a27ad293cb

io.focmec: add station polarity count in out format as well

view details

Tobias Megies

commit sha e01a27f127dc392c9a444278b558d6b9fd0e4b2a

io.focmec: add computing azimuthal gap when possible (lst file)

view details

Tobias Megies

commit sha a5398445c3ac3ab7458a00a091c8593e287bc056

io.focmec: add creation info (time/program name) to focal mechanisms

view details

Tobias Megies

commit sha cbafff58fb784fbdb098e995e2b0f9a9d9b7905d

changelog

view details

Tobias Megies

commit sha a49e01b9cc0b803e6eb389aade605dce75107fa6

flake8

view details

push time in 4 months

Pull request review comment obspy/obspy

add Stream.stack method

 def remove_sensitivity(self, *args, **kwargs):
             tr.remove_sensitivity(*args, **kwargs)
         return self
 
+    def stack(self, group='all', type='linear', npts_tol=0):
+        """
+        Return stream with traces stacked by the same selected metadata.
+
+        :param str group: Stack waveforms together which have the same metadata

For the group parameter I would document the types as it is not obvious and it can take a string or a tuple. In general I also think its nice to be consistent within a single docstring and if a single type is documented, all should be documented.

But it's all a bit moot as we'll hopefully drop Python 2 support very soon and can then just use type hints for documentation, so we'll have some work to do in any case.
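For example, with Python 3 only the signature could carry the types directly (a hypothetical sketch of this method's signature, not the merged code):

from typing import Tuple, Union

def stack(self, group: str = 'all',
          type: Union[str, Tuple[str, int]] = 'linear',
          npts_tol: int = 0) -> 'Stream':
    ...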

trichter

comment created time in 4 months

Pull request review comment obspy/obspy

add Stream.stack method

 def remove_sensitivity(self, *args, **kwargs):
             tr.remove_sensitivity(*args, **kwargs)
         return self
 
+    def stack(self, group='all', type='linear', npts_tol=0):
+        """
+        Return stream with traces stacked by the same selected metadata.
+
+        :param str group: Stack waveforms together which have the same metadata
+            given by this parameter. The parameter should name the
+            corresponding keys of the stats object,
+            e.g. ``'{network}.{station}'`` for stacking all
+            locations and channels of the stations and returning a stream
+            consisting of one stacked trace for each station.
+            This parameter can take two special values,
+            ``'id'`` which stacks the waveforms by SEED id and
+            ``'all'`` (default) which stacks together all traces in the stream.
+        :type type: str or tuple
+        :param type: Type of stack, one of the following:
+            ``'linear'``: average stack (default),
+            ``('pw', order)``: phase weighted stack of given order,
+            see [Schimmel1997]_,
+            ``('root', order)``: root stack of given order.
+        :param int npts_tol: Tolerate traces with different number of points
+            with a difference up to this value. Surplus samples are discarded.
+
+        :returns: New stream object with stacked traces. The metadata of each
+            trace (inlcuding starttime) corresponds to the metadata of the
+            original traces if those are the same. Additionaly, the entries
+            ``stack`` (result of the format operation on the group parameter)
+            and ``stack_count`` (number of stacked traces)
+            are written to the stats object(s).
+
+        >>> from obspy import read
+        >>> st = read()
+        >>> stack = st.stack()
+        >>> print(stack)  # doctest: +ELLIPSIS
+        1 Trace(s) in Stream:
+        BW.RJOB.. | 2009-08-24T00:20:03.000000Z - ... | 100.0 Hz, 3000 samples
+        """
+        if group == 'id':
+            group = '{network}.{station}.{location}.{channel}'
+        groups = collections.defaultdict(list)
+        for tr in self:
+            groups[group.format(**tr.stats)].append(tr)
+        stacks = []
+        for groupid, traces in groups.items():
+            header = {k: v for k, v in traces[0].stats.items()
+                      if all(tr.stats.get(k) == v for tr in traces)}
+            header['stack'] = groupid
+            header['stack_count'] = len(traces)

Yeah, that might work. I slightly prefer grouping all the stack meta parameters, which I think would look a bit nicer.
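Something like this (a sketch with placeholder values; the merged version indeed stores stats.stack as an AttribDict):

from obspy.core.util import AttribDict

groupid, traces, stack_type = 'BW.RJOB..EHZ', [1, 2, 3], 'linear'
header = {}
# Group all stack-related metadata under one entry instead of two flat keys.
header['stack'] = AttribDict(group=groupid, count=len(traces),
                             type=stack_type)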

trichter

comment created time in 4 months

Pull request review comment obspy/obspy

add Stream.stack method

 def remove_sensitivity(self, *args, **kwargs):
             tr.remove_sensitivity(*args, **kwargs)
         return self
 
+    def stack(self, group='all', type='linear', npts_tol=0):
+        """
+        Return stream with traces stacked by the same selected metadata.
+
+        :param str group: Stack waveforms together which have the same metadata

Then you have to use :type group: str as you already did for one of the other parameters. I'm not sure if Sphinx can handle the syntax you originally used, but we should probably stick to a consistent docstring syntax.
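I.e., the consistent two-line form would look like this (a sketch of the docstring only):

def stack(self, group='all', type='linear', npts_tol=0):
    """
    :type group: str
    :param group: Stack waveforms together which have the same metadata.
    :type type: str or tuple
    :param type: Type of stack.
    """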

trichter

comment created time in 4 months

pull request comment obspy/obspy

core: Unify "_is_xxx()" file format checks regarding file handling

This is really a very much needed development! A few thoughts:

  1. Instead of a decorator I would change the plug-in interfaces, i.e. the _is_XXX(), _read_XXX(), and _write_XXX() functions, to only accept io.BufferedIOBase objects (see the sketch after this list). Then the generic reader/writer routines would make sure to convert whatever comes in to one of these objects.
  2. This would be a bigger change and would likely break all existing external file format plug-ins. Thus we'd have to move it to 2.0. But this is also much easier with Python 3 only, so that might be a good thing.
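A sketch of what point 1 could look like (hypothetical helper names, not an actual ObsPy API):

import io

def _normalize_input(pathname_or_obj):
    # The generic reader would hand plug-ins a binary file-like object,
    # whatever the caller originally passed in.
    if isinstance(pathname_or_obj, io.BufferedIOBase):
        return pathname_or_obj
    if isinstance(pathname_or_obj, bytes):
        return io.BytesIO(pathname_or_obj)
    return open(pathname_or_obj, 'rb')

def _is_xxx(fh):
    # Plug-ins could then always assume a seekable binary handle.
    return fh.read(4) == b'XXXX'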
megies

comment created time in 4 months

pull request comment obspy/obspy

WIP - Add support reading SC3ML v0.10; ResponseIIR & decimation on PAZ

Hey guys - any chance you'll get to this in the next few days? Otherwise we'd move it to the next release.

Jollyfant

comment created time in 4 months

delete branch obspy/obspy

delete branch : mpl_backend_context_closing

delete time in 4 months

push event obspy/obspy

Tobias Megies

commit sha 86849cecc5535ebf53dd39d8d89e6f11129832e4

testing: mpl backend switch context manager: add close option

view details

Lion Krischer

commit sha 56c4b6009a9073dd47bcd9074315692a6cc4e68e

Merge pull request #2464 from obspy/mpl_backend_context_closing
testing: mpl backend switch context manager: add close option

view details

push time in 4 months

PR merged obspy/obspy

testing: mpl backend switch context manager: add close option testing

What does this PR do?

Add an option to close all figures when exiting the MatplotlibBackend helper context manager, which is used to temporarily switch the backend.
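For illustration, a functional sketch of the idea - ObsPy's actual MatplotlibBackend helper is a class and its API may differ, so treat the names here as assumptions:

    import contextlib

    import matplotlib
    import matplotlib.pyplot as plt


    @contextlib.contextmanager
    def switch_backend(backend, close=False):
        # Temporarily switch the matplotlib backend; optionally close all
        # figures before switching back, so they don't leak across tests.
        previous = matplotlib.get_backend()
        plt.switch_backend(backend)
        try:
            yield
        finally:
            if close:
                plt.close('all')
            plt.switch_backend(previous)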

Why was it initiated? Any relevant Issues?

Please fill in

PR Checklist

  • [ ] Correct base branch selected? master for new features, maintenance_... for bug fixes
  • [ ] This PR is not directly related to an existing issue (which has no PR yet).
  • [ ] If the PR is making changes to documentation, docs pages can be built automatically. Just remove the space in the following string after the + sign: "+ DOCS"
  • [ ] If any network modules should be tested for the PR, add them as a comma separated list (e.g. clients.fdsn,clients.arclink) after the colon in the following magic string: "+TESTS:" (you can also add "ALL" to just simply run all tests across all modules)
  • [ ] All tests still pass.
  • [ ] Any new features or fixed regressions are covered by new tests.
  • [ ] Any new or changed features are fully documented.
  • [ ] Significant changes have been added to CHANGELOG.txt.
  • [ ] First-time contributors have added their name to CONTRIBUTORS.txt.
+8 -1

1 comment

1 changed file

megies

pr closed time in 4 months

pull request comment obspy/obspy

testing: mpl backend switch context manager: add close option

Can't hurt.

megies

comment created time in 4 months

issue comment openjournals/joss-reviews

[PRE REVIEW]: Underworld2: Python Geodynamics Modelling for Desktop, HPC and Cloud

Hi @leouieda,

thanks for the consideration. While I could review the Python part, I'm not familiar with geodynamic simulations at all, so I don't feel qualified for this particular review.

I think @gassmoeller, for example, would be much more qualified here.

whedon

comment created time in 4 months

Pull request review comment obspy/obspy

add Stream.stack method

 def remove_sensitivity(self, *args, **kwargs):
             tr.remove_sensitivity(*args, **kwargs)
         return self

+    def stack(self, group='all', type='linear', npts_tol=0):
+        """
+        Return stream with traces stacked by the same selected metadata.
+
+        :param str group: Stack waveforms together which have the same metadata
+            given by this parameter. The parameter should name the
+            corresponding keys of the stats object,
+            e.g. ``'{network}.{station}'`` for stacking all
+            locations and channels of the stations and returning a stream
+            consisting of one stacked trace for each station.
+            This parameter can take two special values,
+            ``'id'`` which stacks the waveforms by SEED id and
+            ``'all'`` (default) which stacks together all traces in the stream.
+        :type type: str or tuple
+        :param type: Type of stack, one of the following:
+            ``'linear'``: average stack (default),
+            ``('pw', order)``: phase weighted stack of given order,
+            see [Schimmel1997]_,
+            ``('root', order)``: root stack of given order.
+        :param int npts_tol: Tolerate traces with different number of points
+            with a difference up to this value. Surplus samples are discarded.
+
+        :returns: New stream object with stacked traces. The metadata of each
+            trace (including starttime) corresponds to the metadata of the
+            original traces if those are the same. Additionally, the entries
+            ``stack`` (result of the format operation on the group parameter)
+            and ``stack_count`` (number of stacked traces)
+            are written to the stats object(s).
+
+        >>> from obspy import read
+        >>> st = read()
+        >>> stack = st.stack()
+        >>> print(stack)  # doctest: +ELLIPSIS
+        1 Trace(s) in Stream:
+        BW.RJOB.. | 2009-08-24T00:20:03.000000Z - ... | 100.0 Hz, 3000 samples
+        """
+        if group == 'id':
+            group = '{network}.{station}.{location}.{channel}'
+        groups = collections.defaultdict(list)
+        for tr in self:
+            groups[group.format(**tr.stats)].append(tr)
+        stacks = []
+        for groupid, traces in groups.items():
+            header = {k: v for k, v in traces[0].stats.items()
+                      if all(tr.stats.get(k) == v for tr in traces)}

I think the function should raise if the sampling rate is not set after this line - otherwise the Trace constructor will set it to 1.0, which is likely wrong and will be confusing for people (see the sketch at the end of this comment).

This line IMHO already works really well for the start time - it will either be preserved when stacking different channels/stations that happen to have the same start time, or be set to timestamp 0 when stacking multiple time periods. You could consider allowing a small tolerance within which the start time would still be preserved, but that might be overkill.
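A minimal sketch of the suggested sampling-rate guard, reusing the PR's header dict (the exact exception type and message are just an example):

    if 'sampling_rate' not in header:
        raise ValueError(
            'Cannot stack traces with differing sampling rates.')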

trichter

comment created time in 4 months

Pull request review comment obspy/obspy

add Stream.stack method

 def remove_sensitivity(self, *args, **kwargs):
             tr.remove_sensitivity(*args, **kwargs)
         return self

+    def stack(self, group='all', type='linear', npts_tol=0):
+        """
+        Return stream with traces stacked by the same selected metadata.
+
+        :param group: Stack waveforms together which have the same metadata
+            given by this parameter. The parameter should name the
+            corresponding keys of the stats object,
+            e.g. `'{network}.{station}'` for stacking all
+            locations and channels of the stations and returning a stream
+            consisting of one stacked trace for each station.
+            This parameter can take two special values,
+            `'seedid'` which stacks the waveforms by seedid and
+            `'all'` (default) which stacks together all traces in the stream.
+        :param type: Type of stack, one of the following:
+            `'linear'`: average stack (default),
+            `('pw', order)`: phase weighted stack of given order,
+            see [Schimmel1997]_,
+            `('root', order)`: root stack of given order.
+        :param npts_tol: Tolerate traces with different number of points
+            with a difference up to this value. Surplus samples are discarded.
+
+        :returns: New stream object with stacked traces. The metadata of each

I see two solutions here:

(1) Actually change the existing stream object in place. This is what a few other methods, e.g. .merge(), already do, and it would fit right in with most other stream methods.

(2) Rename the method to .create_stack(), which would return a new object. Then it's IMHO a bit clearer that this does not modify in place.

I slightly tend towards the first choice, but I'm fine with either (a sketch of the first option follows below).
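In sketch form, option (1) would end roughly like this, following the pattern of other in-place stream methods such as .merge() (hypothetical, not the PR's actual code):

    # replace the traces in place and return self to allow chaining
    self.traces = stacks
    return self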

trichter

comment created time in 4 months

Pull request review comment obspy/obspy

add Stream.stack method

 def remove_sensitivity(self, *args, **kwargs):
             tr.remove_sensitivity(*args, **kwargs)
         return self

+    def stack(self, group='all', type='linear', npts_tol=0):
+        """
+        Return stream with traces stacked by the same selected metadata.
+
+        :param str group: Stack waveforms together which have the same metadata
+            given by this parameter. The parameter should name the
+            corresponding keys of the stats object,
+            e.g. ``'{network}.{station}'`` for stacking all
+            locations and channels of the stations and returning a stream
+            consisting of one stacked trace for each station.
+            This parameter can take two special values,
+            ``'id'`` which stacks the waveforms by SEED id and
+            ``'all'`` (default) which stacks together all traces in the stream.
+        :type type: str or tuple
+        :param type: Type of stack, one of the following:
+            ``'linear'``: average stack (default),
+            ``('pw', order)``: phase weighted stack of given order,
+            see [Schimmel1997]_,
+            ``('root', order)``: root stack of given order.
+        :param int npts_tol: Tolerate traces with different number of points
+            with a difference up to this value. Surplus samples are discarded.
+
+        :returns: New stream object with stacked traces. The metadata of each
+            trace (including starttime) corresponds to the metadata of the
+            original traces if those are the same. Additionally, the entries
+            ``stack`` (result of the format operation on the group parameter)
+            and ``stack_count`` (number of stacked traces)
+            are written to the stats object(s).
+
+        >>> from obspy import read
+        >>> st = read()
+        >>> stack = st.stack()
+        >>> print(stack)  # doctest: +ELLIPSIS
+        1 Trace(s) in Stream:
+        BW.RJOB.. | 2009-08-24T00:20:03.000000Z - ... | 100.0 Hz, 3000 samples
+        """
+        if group == 'id':
+            group = '{network}.{station}.{location}.{channel}'
+        groups = collections.defaultdict(list)
+        for tr in self:
+            groups[group.format(**tr.stats)].append(tr)
+        stacks = []
+        for groupid, traces in groups.items():
+            header = {k: v for k, v in traces[0].stats.items()
+                      if all(tr.stats.get(k) == v for tr in traces)}
+            header['stack'] = groupid
+            header['stack_count'] = len(traces)

We don't really have a good concept for this type of meta-data for the stack, so I also don't really know what the best way is here. But maybe move these two into a new stack group in the stats dictionary? Really just a suggestion though.

trichter

comment created time in 4 months

Pull request review comment obspy/obspy

add Stream.stack method

 def remove_sensitivity(self, *args, **kwargs):
             tr.remove_sensitivity(*args, **kwargs)
         return self

+    def stack(self, group='all', type='linear', npts_tol=0):

Could you rename type to stack_type so it does not shadow the built-in type?

trichter

comment created time in 4 months

Pull request review comment obspy/obspy

add Stream.stack method

 def remove_sensitivity(self, *args, **kwargs):
             tr.remove_sensitivity(*args, **kwargs)
         return self

+    def stack(self, group='all', type='linear', npts_tol=0):
+        """
+        Return stream with traces stacked by the same selected metadata.
+
+        :param str group: Stack waveforms together which have the same metadata
Suggested change:

        :param group: Stack waveforms together which have the same metadata
trichter

comment created time in 4 months

Pull request review comment obspy/obspy

add Stream.stack method

 def remove_sensitivity(self, *args, **kwargs):
             tr.remove_sensitivity(*args, **kwargs)
         return self

+    def stack(self, group='all', type='linear', npts_tol=0):

Also, maybe rename group to group_by, which is what most other packages, like pandas, would call an operation like this (see the comparison below).
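For comparison, the pandas spelling alluded to - a toy example, nothing ObsPy-specific:

    import pandas as pd

    df = pd.DataFrame({'network': ['BW', 'BW', 'GR'],
                       'station': ['RJOB', 'RJOB', 'FUR'],
                       'amp': [1.0, 3.0, 2.0]})
    # 'stacks' (averages) the amp column per (network, station) group
    print(df.groupby(['network', 'station']).mean())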

trichter

comment created time in 4 months

push event amkearns-usgs/obspy

Lion Krischer

commit sha eb03a85888b6057ea81e24102c30837c75d7a979

make flake8 happy.

view details

push time in 4 months

push event amkearns-usgs/obspy

Lion Krischer

commit sha b28c9a5dc45fafbaa988b9cb84887d6a67abfe76

Changelog.

view details

push time in 4 months

pull request comment obspy/obspy

Create response files from Poles and Zeros lists

Sorry for the crazy slow response on this one. I rebased on the latest master and changed the static method to a class method. Will merge this if CI passes, as especially the evalresp comparison test IMHO looks really trustworthy.
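For readers wondering about the static method to class method change, a generic sketch of the pattern - the names and signature here are hypothetical, not ObsPy's actual Response API:

    class Response:
        def __init__(self, response_stages=None):
            self.response_stages = response_stages or []

        @classmethod
        def from_paz(cls, zeros, poles, stage_gain):
            # cls(...) instead of a hard-coded Response(...) lets
            # subclasses inherit the factory and get back instances
            # of their own type.
            resp = cls()
            resp.response_stages.append((zeros, poles, stage_gain))
            return resp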

amkearns-usgs

comment created time in 4 months

push event amkearns-usgs/obspy

derrick

commit sha e6f320a50d8eb6293af5b66ce27367b5ce17f3f5

deprecate resource_id mutative methods

view details

derrick

commit sha 0e65896bb0fa1a7b9783867d68d467ecdb6fde9f

add tests

view details

derrick

commit sha 5f4eb69af4eb908d38b9d41d89a46cab3e079825

flake8

view details

Derrick Chambers

commit sha d5af46434e240817684fc231656317b9d5059dbf

changelog entry

view details

Lion Krischer

commit sha e1d782be0e69617102edb009545ea50f5db720d1

Using sphinx references.

view details

Lion Krischer

commit sha 6bef0cb999eafe5b0b66e6d8b3583fca6fce6ffb

Fixing typos.

view details

Tobias Megies

commit sha 4567a1f86c3e84d00d9bb943bda58ca55ff16391

eida fdsn token auth: make it possible to use signed tokens

view details

Tobias Megies

commit sha 21d8c49fa32ad5012ca2c2ce621a887c3b7d9b5e

eida fdsn token auth: make it possible to skip token validation entirely

view details

Lion Krischer

commit sha e8591482fd6ddbf4d8f51e8f00ff8a472c02b9e5

Merge pull request #2359 from obspy/eida_signed_tokens
Enable using signed EIDA FDSN tokens

view details

Tobias Megies

commit sha cb7d4b0c80649326a0251243f09bfcab582000ba

scan: add test for SEED ID selection in plotting with fnmatch wildcards

view details

Tobias Megies

commit sha dabfe4b1e79e9acdf023d7518e447da498942fe4

deb: update quilt patches

view details

Tobias Megies

commit sha ba61aad106d79b9982d44bc0e4b5583a2e6ec7da

obspy-scan: allow fnmatch wildcards when selecting specific SEED IDs during plotting so one can e.g. select all "[EH]H*" channels for a plot of all scanned data

view details
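The wildcard selection described in the commit above boils down to fnmatch matching against full SEED IDs; a self-contained illustration (the IDs are made up):

    import fnmatch

    seed_ids = ['BW.RJOB..EHZ', 'BW.RJOB..HHZ', 'GR.FUR..BHZ']
    print([s for s in seed_ids if fnmatch.fnmatch(s, '*.[EH]H*')])
    # -> ['BW.RJOB..EHZ', 'BW.RJOB..HHZ']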

trichter

commit sha 2741098cdf35d8e17321b373aeeeec30e8a219e1

add xcorr_detector function

view details

trichter

commit sha 1c71f48e9ed146350a29db67c69264520b803e0c

try to make travis pass on py3.4

view details

trichter

commit sha e1264a1c13bf7ca6d0e099a280bd8589b8cf72c4

implement review suggestions, split xcorr_detector into two functions

view details

trichter

commit sha e6a2db7a5a012f51e32727dc23618c4bd6881a4c

remove condition argument, is not necessary anymore, rework plotting

view details

trichter

commit sha 1bcaa43134bbdc40c5aa8791c6a1914cef99338c

test if xcorr stream is suitable for coincidence_trigger

view details

trichter

commit sha 72607cbd81ae84dc01ed2d95b0b37ebe3d25367f

fix doc, add cross references

view details

trichter

commit sha e34ff9b05e1ebe8a458df022b1e26e6f1c88b0e5

add detector tutorial

view details

trichter

commit sha ffa4f9ff5427b2141bc46d3c49633106667127dc

add changelog

view details

push time in 4 months

push event krischer/seismo_live

Lion Krischer

commit sha 227e6d97c8c5207dafc70c7d62fcc9f398687e0d

Needs to build obspy from source.

view details

push time in 4 months

push event krischer/seismo_live

Lion Krischer

commit sha b22df2e059a49630ad82c110fc59817080cbf8f6

installing obspy.

view details

push time in 4 months

push event krischer/seismo_live

Lion Krischer

commit sha d23a8f2f656c259b44f824822b58cea3f2554363

Try to install jupyter instead of jupyter_client.

view details

push time in 4 months

push event krischer/seismo_live

Lion Krischer

commit sha cb73f561248010ef2facd5e28b3314425ee1d049

Install jupyter client.

view details

push time in 4 months

push event krischer/seismo_live

Lion Krischer

commit sha 5654f383219f869cdbe862e6d857905ab1f8f90c

Setting python version in azure pipelines.

view details

push time in 4 months

issue opened krischer/seismo_live

Roadmap

  • [ ] Finish #33 (@sebnoe @krischer)
  • [ ] Docker base image (@megies)
  • [ ] Auto-generate index page (@krischer)
  • [ ] Public relations and 💵 (@heinerigel)

created time in 4 months

push event krischer/seismo_live

Lion Krischer

commit sha d9df5a21982fe95559c875ec8fea697647d91570

Modifying azure pipelines.

view details

push time in 4 months

push event krischer/seismo_live

Lion Krischer

commit sha 332ae3598daf08842874db8569a19ac9b2ef1a0f

Updating script.

view details

Lion Krischer

commit sha f69d5cff38f0acdf3e1c021261b50e6255c40903

Adding example azure pipelines.

view details

push time in 4 months

PR opened krischer/seismo_live

Convert everything to Jupytext

Also the beginnings of a new workflow for how to do things.
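For context, Jupytext pairs .ipynb notebooks with plain-text script representations. A minimal round trip with its Python API (assuming jupytext is installed and a notebook.ipynb exists; the file names are placeholders):

    import jupytext

    nb = jupytext.read('notebook.ipynb')               # load the notebook
    jupytext.write(nb, 'notebook.py', fmt='py:light')  # save as a light script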

+25957 -124852

0 comments

230 changed files

pr created time in 4 months

create branch krischer/seismo_live

branch : jupytext

created branch time in 4 months


push event obspy/docs

Lion Krischer

commit sha 4131d8fbc02ae3a498f367930cd0426aff04db1b

Add 2018 ObsPy poster.

view details

push time in 4 months
