
krischer/django-plugins 31

A Simple Plugin Framework for Django

barsch/seishub.core 19

SeisHub - a seismological XML/SQL database hybrid

echolite/ses3d 7

Spectral-Elements 3D

barsch/seishub.plugins.seismology 5

Seismology package for SeisHub.

krischer/awesome-python 4

A curated list of awesome Python frameworks, libraries, software and resources

barsch/seishub.plugins.exupery 3

Exupery package for SeisHub

iris-edu/mseed3-evaluation 3

A repository for technical evaluation and implementation of potential next generation miniSEED formats

pull request comment obspy/obspy

Introduce fine-grained FDSN client exceptions

Instead of relying on web services to return a certain status (which might change in the future), it would be easier and more stable to mock out the response to trigger all the code paths in the tests. A simplistic way of doing this would be to just mock out the download_url() function: https://github.com/obspy/obspy/blob/master/obspy/clients/fdsn/client.py#L1751

Please have a look at the mock library here in case you are not familiar with it: https://docs.python.org/3/library/unittest.mock.html
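A minimal sketch of that mocking approach (the (status_code, data) return shape is an assumption here - check the real download_url() signature in the linked file before relying on it):

import unittest.mock as mock

# Patch the module-level helper so a test can force any HTTP status
# without touching the network. The return shape below is assumed.
with mock.patch("obspy.clients.fdsn.client.download_url") as mocked:
    mocked.return_value = (204, None)  # e.g. simulate a "no data" response
    # ... build a Client and issue a request here; every call that would
    # normally hit the web service now gets the canned response instead.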

yetinam

comment created time in 3 days

issue comment obspy/obspy

PPSD: Need to adjust to `set_clim` API change in matplotlib 3.1-3.3

Do you want to do this for 1.2.2? Might be a good idea but also not super urgent.

megies

comment created time in 10 days

pull request comment obspy/obspy

Backport numpy distutils fix

I'd agree - I think we can just tag 1.2.2 immediately and then push to PyPI. The other packages can come later, but then the issue is resolved for most people.

krischer

comment created time in 10 days

push event obspy/obspy

Damian Kula

commit sha b25ea427c5448c146a586493580e9c4b35fee82c

Imports DistutilsSetupError directly from distutils instead from numpy Signed-off-by: Damian Kula <dkula@unistra.fr>

view details

Lion Krischer

commit sha aec497b39705cc5f8ba29d7be47ec71267ca1021

Merge pull request #2647 from obspy/krischer/backport-numpy-distutils-fix Backport numpy distutils fix

view details

push time in 10 days

PR merged obspy/obspy

Backport numpy distutils fix

This is a backport of https://github.com/obspy/obspy/pull/2643 to the maintenance branch.

Installation of ObsPy with the latest numpy is currently not possible on platforms without wheels. I think this warrants a new minor ObsPy release.

+2 -1

1 comment

1 changed file

krischer

pr closed time in 10 days

PR merged krischer/instaseis

Clarify direction of r, theta, phi unit vectors

In trying to create a ForceSource object, I was unclear about the direction of the r, theta, and phi hat vectors (mainly the latter two). I found a definition in Nissen-Meyer et al. (2008), Figure 1:

[Screenshot: Figure 1 from Nissen-Meyer et al. (2008)]

This suggests that — relative to someone standing on Earth's surface — r is positive upwards, theta is positive to the south, and phi is positive to the east. Is this the correct definition? If so, I would kindly suggest including a reference to the paper (as implemented in this PR) for some clarity.

Thanks!

(The RST formatting might be wrong here, I was just guessing.)

+4 -0

2 comments

1 changed file

liamtoney

pr closed time in 12 days

pull request comment krischer/instaseis

Clarify direction of r, theta, phi unit vectors

Thanks a bunch!

liamtoney

comment created time in 12 days

push event krischer/instaseis

Liam Toney

commit sha 753e60556c2e4503ee78af055869f160469af896

Clarify direction of r, theta, phi unit vectors

view details

Liam Toney

commit sha 8524446fed94c184c159e95f74b883160bd3dccc

Remove clarification from source.py

view details

Liam Toney

commit sha e26b1bf74f7b2ce1eb69df0c06511425f15c5112

Add direction clarification to note

view details

Liam Toney

commit sha 7922ee40c42dd0251befa53a91acdeec56de6f1e

Fix typo

view details

Lion Krischer

commit sha 7a68ee334aee45802e497c240428ef00dd6b3de7

Merge pull request #74 from liamtoney/patch-1 Clarify direction of r, theta, phi unit vectors

view details

push time in 12 days

issue closed obspy/obspy

Linux and OSX wheels are missing on pypi

I noticed this while looking at https://github.com/obspy/obspy/pull/2647

For some reason they are no longer available: https://pypi.org/project/obspy/1.2.1/#files

They definitely used to be available because I had them in my cache:

[Screenshot: locally cached ObsPy 1.2.1 wheel]

Deleting this cached file results in me no longer being able to install ObsPy with pip due to the aforementioned issue.

Does anyone know what is going on?

closed time in 12 days

krischer

issue comment obspy/obspy

Linux and OSX wheels are missing on pypi

Apparently there never were wheels for 1.2.1 for Linux and OSX - I think I just had a local one because pip created it the first time it installed it.

krischer

comment created time in 12 days

issue opened obspy/obspy

Linux and OSX wheels are missing on pypi

I noticed this while looking at https://github.com/obspy/obspy/pull/2647

For some reason they are no longer available: https://pypi.org/project/obspy/1.2.1/#files

They definitely used to be available because I had them in my cache:

[Screenshot: locally cached ObsPy 1.2.1 wheel]

Deleting this cached file results in me no longer being able to install ObsPy with pip due to the aforementioned issue.

Does anyone know what is going on?

created time in 12 days

PR opened obspy/obspy

Backport numpy distutils fix

This is a backport of https://github.com/obspy/obspy/pull/2643 to the maintenance branch.

Installation of ObsPy with the latest numpy is currently not possible on platforms without wheels. I think this warrants a new minor ObsPy release.

+2 -1

0 comments

1 changed file

pr created time in 12 days

create branch obspy/obspy

branch : krischer/backport-numpy-distutils-fix

created branch time in 12 days

pull request comment krischer/instaseis

Clarify direction of r, theta, phi unit vectors

r, theta, and phi refer to the standard spherical coordinate system that, to my understanding, is used throughout most of physics (to the point of being an official ISO standard). It is the same coordinate system that is also used for moment tensor sources.

I like the idea of adding an explanation. Could you please make two small changes then I'd be happy to merge it:

  • Instead of linking to the paper, directly spell out what r, theta, and phi are (saves users a click; see the sketch after this list) and maybe link to the Wikipedia page on spherical coordinate systems: https://en.wikipedia.org/wiki/Spherical_coordinate_system
  • Could you move it to the top of the file - there is already a box in the topmost comment which is currently rendered at the beginning of this page: https://instaseis.net/source.html. That way it would be clear that it applies to everything on the page.
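A minimal sketch of the convention in code (parameter names follow the instaseis ForceSource API; the numbers are placeholders):

import instaseis

# For an observer on the Earth's surface: r points up, theta points
# south, phi points east.
source = instaseis.ForceSource(
    latitude=89.91,
    longitude=0.0,
    f_r=1e10,  # radial force, positive upwards
    f_t=0.0,   # theta force, positive towards the south
    f_p=0.0)   # phi force, positive towards the east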
liamtoney

comment created time in 19 days

issue comment krischer/LASIF

RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject

I think the errors you see are due to an incorrect numpy installation. Reinstalling numpy might rectify that particular error.

That being said, LASIF does not support Python 3 and currently only works with Python 2.7. I'd be happy to accept a PR that ports it to Python 3, but currently I have no plans to do it myself.

There is a fork of LASIF that works with Python 3 (https://github.com/dirkphilip/LASIF_2.0) and that is actively being worked on. Scope and features are a bit different, which is why they are separate repositories. I don't know what you want to do, so I cannot give any further advice.

Cheers!

ZQiwen

comment created time in 19 days

issue comment SeismicData/pyasdf

How to make pyasdf to support multi-date-range station inventory + custom tags?

If you want to install it on Python 3.6 for now it should be a simple case of removing the following line:

https://github.com/SeismicData/pyasdf/blob/master/setup.py#L163
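For context, a hypothetical sketch of what such a version gate in setup.py might look like - the actual content of the linked line is not quoted here, so treat this purely as an illustration:

from setuptools import setup

# Hypothetical: if the linked line is a version gate like the one below,
# removing it would let pip install pyasdf on Python 3.6 again
# (unsupported, at your own risk).
setup(
    name="pyasdf",
    python_requires=">=3.7")  # the assumed line to remove for 3.6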

I'll no longer officially support it to keep the maintenance burden down and also because the next ObsPy version will be Python >= 3.7 only. Future pyasdf versions will also freely use Python 3.7 features.

That being said, I'd be willing to merge a pull request that adds Python 3.6 support again, with the understanding that no effort will be expended to retain it in the future.

zhang01GA

comment created time in 20 days

PR merged krischer/instaseis

Fix typo in index.rst

I think this is the intended meaning(?)

+2 -2

1 comment

1 changed file

liamtoney

pr closed time in 20 days

push event krischer/instaseis

Liam Toney

commit sha 60dad1346b4f2601421d7711195fc04ebdc24be8

Fix typo in index.rst I think this is the intended meaning(?)

view details

Lion Krischer

commit sha 88bb6181b499fbd321ab24f36928641bc1a7594b

Merge pull request #73 from liamtoney/patch-1 Fix typo in index.rst

view details

push time in 20 days

pull request comment krischer/instaseis

Fix typo in index.rst

Yes definitely! Thanks a bunch!

liamtoney

comment created time in 20 days

push event conda-forge/pyasdf-feedstock

Lion Krischer

commit sha 06603dd478a1fa35b4be071a41ddf96243ac428d

Updating to 0.7.1

view details

Lion Krischer

commit sha 823d9f5f0f732f06c799cc0df3a7f9739afa7b28

Merge pull request #14 from krischer/0.7.1 Updating to 0.7.1

view details

push time in 20 days

issue comment SeismicData/pyasdf

How to make pyasdf to support multi-date-range station inventory + custom tags?

You are right - it was actually written to the StationXML files with pyasdf 0.7.0 but not in a way that could be easily accessed with ObsPy. This has been rectified with version 0.7.1 and I also added a full roundtripping integration test so it should work now!

zhang01GA

comment created time in 20 days

PR opened conda-forge/pyasdf-feedstock

Updating to 0.7.1

Checklist

  • [ ] Used a fork of the feedstock to propose changes
  • [ ] Bumped the build number (if the version is unchanged)
  • [ ] Reset the build number to 0 (if the version changed)
  • Re-rendered with the latest conda-smithy (use the phrase "@ conda-forge-admin, please rerender" in a comment in this PR for automated rerendering)
  • [ ] Ensured the license file is being packaged.


+2 -2

0 comments

1 changed file

pr created time in 20 days

push event SeismicData/pyasdf

Lion Krischer

commit sha 24dc72621088c948417be66f22e6a8d6dccf415f

Trying to fix the formatting of the readme.

view details

push time in 21 days

create branch krischer/pyasdf-feedstock

branch : 0.7.1

created branch time in 21 days

push event krischer/pyasdf-feedstock

Lion Krischer

commit sha 97e57e1f4f4e4f64257f99452ea65b9757c8e4ae

Update to 0.7.0

view details

Lion Krischer

commit sha 8f58dad1affde7f103e37ea8eae6d6b01b31f513

Merge pull request #13 from krischer/0.7.0 Update to 0.7.0

view details

push time in 21 days

push event SeismicData/pyasdf

Travis

commit sha 352bf19fdbebad320cd4d33d370ed18f70fe2a8e

Travis build 195 pushed to gh-pages

view details

push time in 21 days

created tag SeismicData/pyasdf

tag 0.7.1

Python Interface to ASDF based on ObsPy

created time in 21 days

push event SeismicData/pyasdf

Lion Krischer

commit sha c6618cf3b2049ffa04f9b2a9232a9284ff3fa4d0

Make sure namespace maps are properly written to StationXML files.

view details

push time in 21 days

issue comment SeismicData/pyasdf

Calling parallel process() for a very large file gets stuck

That would be great - thanks!

I don't really understand why the inventory is printed in the above output. Otherwise - if sorted by worker - it is fine (in the current pyasdf implementation rank 0 is always the MASTER and all workers [all other ranks] can request new stations to process):

Worker 1

WORKER 1 sent to MASTER [WORKER_REQUESTS_ITEM] -- None
MASTER received from WORKER 1 [WORKER_REQUESTS_ITEM] -- None
MASTER sent to WORKER 1 [MASTER_SENDS_ITEM] -- ('1A.NE03', 'synthetic')
WORKER 1 received from MASTER [MASTER_SENDS_ITEM] -- ('1A.NE03', 'synthetic')

Worker 2

MASTER received from WORKER 2 [WORKER_REQUESTS_ITEM] -- None
WORKER 2 sent to MASTER [WORKER_REQUESTS_ITEM] -- None
MASTER sent to WORKER 2 [MASTER_SENDS_ITEM] -- ('1A.CORRE', 'synthetic')
WORKER 2 received from MASTER [MASTER_SENDS_ITEM] -- ('1A.CORRE', 'synthetic')

WORKER 2 sent to MASTER [WORKER_REQUESTS_ITEM] -- None
MASTER received from WORKER 2 [WORKER_REQUESTS_ITEM] -- None
MASTER sent to WORKER 2 [MASTER_SENDS_ITEM] -- ('1A.NE04', 'synthetic')
WORKER 2 received from MASTER [MASTER_SENDS_ITEM] -- ('1A.NE04', 'synthetic')

Worker 3

WORKER 3 sent to MASTER [WORKER_REQUESTS_ITEM] -- None
MASTER received from WORKER 3 [WORKER_REQUESTS_ITEM] -- None
MASTER sent to WORKER 3 [MASTER_SENDS_ITEM] -- ('1A.NE00', 'synthetic')
WORKER 3 received from MASTER [MASTER_SENDS_ITEM] -- ('1A.NE00', 'synthetic')

Worker 4

WORKER 4 sent to MASTER [WORKER_REQUESTS_ITEM] -- None
MASTER received from WORKER 4 [WORKER_REQUESTS_ITEM] -- None
MASTER sent to WORKER 4 [MASTER_SENDS_ITEM] -- ('1A.NE01', 'synthetic')
WORKER 4 received from MASTER [MASTER_SENDS_ITEM] -- ('1A.NE01', 'synthetic')

Worker 5

WORKER 5 sent to MASTER [WORKER_REQUESTS_ITEM] -- None
MASTER received from WORKER 5 [WORKER_REQUESTS_ITEM] -- None
MASTER sent to WORKER 5 [MASTER_SENDS_ITEM] -- ('1A.NE02', 'synthetic')
WORKER 5 received from MASTER [MASTER_SENDS_ITEM] -- ('1A.NE02', 'synthetic')
icui

comment created time in 21 days

delete branch krischer/pyasdf-feedstock

delete branch : 0.7.0

delete time in 21 days

push event conda-forge/pyasdf-feedstock

Lion Krischer

commit sha 97e57e1f4f4e4f64257f99452ea65b9757c8e4ae

Update to 0.7.0

view details

Lion Krischer

commit sha 8f58dad1affde7f103e37ea8eae6d6b01b31f513

Merge pull request #13 from krischer/0.7.0 Update to 0.7.0

view details

push time in 21 days

PR merged conda-forge/pyasdf-feedstock

Update to 0.7.0

Update to 0.7.0

+2 -2

1 comment

1 changed file

krischer

pr closed time in 21 days

issue comment SeismicData/pyasdf

Calling parallel process() for a very large file gets stuck

Is that the full output? Unfortunately it does not provide enough information for me to understand what is going on and at that scale I cannot easily reproduce it. It would be great if you could investigate the problem a bit and tell me what you find but I understand that this is not easy.

What should also work is to just disable MPI for pyasdf - this falls back to using Python's multiprocessing module. It should scale decently well to a full node, and it uses a round-robin method to organize the actual writing to the file on disk.

To do this, simply create the ASDFDataSet object with pyasdf.ASDFDataSet(..., mpi=False). Then in your batch submission file (I don't know which queuing system Summit uses) make sure to submit the job in a way that does not use mpiexec/mpirun (or, if you use it, use a single rank) but still has access to a full node. pyasdf should then just take all cores of a node and process the data with them. This will not work for more than one node!
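A minimal sketch of that fallback (file names, tags, and the processing function are placeholders; the process() call follows the current pyasdf API):

import pyasdf

# Open the data set with MPI explicitly disabled so pyasdf falls back to
# Python's multiprocessing module on a single node.
ds = pyasdf.ASDFDataSet("example.h5", mpi=False)

def process(stream, inventory):
    # ... per-station processing goes here ...
    return stream

# Writes the processed waveforms to a new file; the tag names here are
# placeholders for whatever tags the data set actually contains.
ds.process(process, "processed.h5", tag_map={"synthetic": "processed"})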

icui

comment created time in 21 days

issue closed SeismicData/pyasdf

adding an existing StationXML throws TypeError

https://github.com/SeismicData/pyasdf/blob/8431d2d185ab3b3c94b0f82bccf3f79fc3769456/pyasdf/inventory_utils.py#L144

Trying to run set() on a list of Comments throws a TypeError: unhashable type: 'Comment'

I encountered this when trying to add a StationXML with a list of comments that was already contained in the dataset.

It would be nice if it threw an ASDFWarning, similar to when adding waveform data already contained in the dataset.

closed time in 21 days

bch0w

issue comment SeismicData/pyasdf

adding an existing StationXML throws TypeError

Hi @bch0w

the just released pyasdf 0.7.0 is a bit more conservative when merging inventory objects which should also fix that particular issue. Please let me know if that is not the case.

Cheers!

bch0w

comment created time in 21 days

issue closed SeismicData/pyasdf

How to make pyasdf to support multi-date-range station inventory + custom tags?

When I add a station XML via ds.add_stationxml("multi-date-ranges.xml") - see the attached XML file:

OA.CE22_station_inv_modified_xml.txt

and then extract the station XML again as shown below, there are two issues:

  1. The multiple date ranges are merged into one XML node for station code="CE22".
  2. Our custom-tagged metadata (allowed by ObsPy and FDSN) is lost.

Can you comment on these issues? Thank you!

<?xml version='1.0' encoding='UTF-8'?>
<FDSNStationXML xmlns="http://www.fdsn.org/xml/station/1" schemaVersion="1.0">
  <Source>Geoscience Australia</Source>
  <Module>ObsPy 1.0.2</Module>
  <ModuleURI>https://www.obspy.org</ModuleURI>
  <Created>2019-02-02T18:42:45</Created>
  <Network code="OA" startDate="2017-09-11T00:00:36" endDate="2018-11-28T23:06:20">
    <SelectedNumberStations>1</SelectedNumberStations>
    <Station code="CE22" startDate="2017-11-04T03:16:35" endDate="2018-11-18T20:23:20">
      <Latitude unit="DEGREES">-18.49507</Latitude>
      <Longitude unit="DEGREES">139.002731</Longitude>
      <Elevation unit="METERS">62.7</Elevation>
      <Site>
        <Name>CE22</Name>
      </Site>
      <Vault>Transportable Array</Vault>
      <CreationDate>2017-11-04T03:16:35</CreationDate>
      <TerminationDate>2018-06-06T01:02:24</TerminationDate>
      <SelectedNumberChannels>3</SelectedNumberChannels>
      <Channel code="HHZ" locationCode="0M">
        <Latitude unit="DEGREES">-18.49507</Latitude>
        <Longitude unit="DEGREES">139.002731</Longitude>
        <Elevation unit="METERS">62.7</Elevation>
        <Depth unit="METERS">0.0</Depth>
        <Azimuth unit="DEGREES">0.0</Azimuth>
        <Dip unit="DEGREES">90.0</Dip>
        <SampleRate unit="SAMPLES/S">200.0</SampleRate>
        <ClockDrift unit="SECONDS/SAMPLE">0.0</ClockDrift>
      </Channel>
      <Channel code="HHE" locationCode="0M">
        <Latitude unit="DEGREES">-18.49507</Latitude>
        <Longitude unit="DEGREES">139.002731</Longitude>
        <Elevation unit="METERS">62.7</Elevation>
        <Depth unit="METERS">0.0</Depth>
        <Azimuth unit="DEGREES">90.0</Azimuth>
        <Dip unit="DEGREES">0.0</Dip>
        <SampleRate unit="SAMPLES/S">200.0</SampleRate>
        <ClockDrift unit="SECONDS/SAMPLE">0.0</ClockDrift>
      </Channel>
      <Channel code="HHN" locationCode="0M">
        <Latitude unit="DEGREES">-18.49507</Latitude>
        <Longitude unit="DEGREES">139.002731</Longitude>
        <Elevation unit="METERS">62.7</Elevation>
        <Depth unit="METERS">0.0</Depth>
        <Azimuth unit="DEGREES">0.0</Azimuth>
        <Dip unit="DEGREES">0.0</Dip>
        <SampleRate unit="SAMPLES/S">200.0</SampleRate>
        <ClockDrift unit="SECONDS/SAMPLE">0.0</ClockDrift>
      </Channel>
    </Station>
  </Network>
</FDSNStationXML>

closed time in 21 days

zhang01GA

issue comment SeismicData/pyasdf

How to make pyasdf to support multi-date-range station inventory + custom tags?

Good catch - I never really considered this. I just released pyasdf 0.7.0 which is more conservative when merging stations in inventory objects. I think this is a good idea in any case given that there is no good solution to this problem in general. In your particular case it means that both your problems should be resolved. Please let me know if that is not the case!

zhang01GA

comment created time in 21 days

PR opened conda-forge/pyasdf-feedstock

Update to 0.7.0

Update to 0.7.0

+2 -2

0 comments

1 changed file

pr created time in 21 days

create branch krischer/pyasdf-feedstock

branch : 0.7.0

created branch time in 21 days

push event krischer/pyasdf-feedstock

Lion Krischer

commit sha 18bedbfa7cc6e3532c0b56f07c5816ec235b6e9b

updating to 0.6.1

view details

Lion Krischer

commit sha e53ae8aaa1e1e0a2843ac8e7887f5f8147390316

Merge pull request #12 from krischer/0.6.1 Updating to 0.6.1

view details

conda-forge-admin

commit sha e3635b58095cbd26907eb97f5bec097cc27e8167

[ci skip] [skip ci] [cf admin skip] ***NO_CI*** admin migration CFEP13TokensAndConfig

view details

Matthew R Becker

commit sha 0dc2b1b7a55fccb2afa1a78ba9ac2bf843e0eb25

[ci skip] [skip ci] [cf admin skip] ***NO_CI*** admin migration CondaForgeAutomerge

view details

push time in 21 days

push event SeismicData/pyasdf

Travis

commit sha 8039552a4704ccbf25e0ea81de577a84843e520d

Travis build 194 pushed to gh-pages

view details

push time in 21 days

push event SeismicData/pyasdf

Lion Krischer

commit sha 8a516212d16963fe73a7bc8e926de66e246cbbd9

Force upgrade of all packages in travis.

view details

push time in 21 days

push event SeismicData/pyasdf

Lion Krischer

commit sha d762f93e6ee2d8c85324b8c74a6569135c6d0683

Extra test for retaining information from other namespaces.

view details

Lion Krischer

commit sha 3b503036c0da6ad7722fd83550225a9744d3f4dc

Add failing test for different station epochs.

view details

Lion Krischer

commit sha 82d26d25ac6b25619aff4ac529ed62fa8cc39732

More conservative station merging behaviour.

view details

Lion Krischer

commit sha 0ce0e8131abff3bb9d099263df66d22eca141150

Version 0.7.0

view details

push time in 21 days

created tag SeismicData/pyasdf

tag 0.7.0

Python Interface to ASDF based on ObsPy

created time in 21 days

issue comment obspy/obspy

`remove_response` very slow for 24 hours of data - bottleneck in filter-design?

@flixha You are correct in that it is a ridiculous overkill to compute the response at every single frequency. Evalresp indeed includes some functionality to compute it on fewer samples and then interpolate with splines to all frequencies. We never wrapped this but probably should have.

This is a lot simpler to implement in #2592 and thus I would suggest doing it there. I would assume that computing the actual response on maybe 1000 samples is enough (one would have to test, of course). I assume that the FFT will then become the limiting factor, but there is only so much one can do there, although some kind of sliding window approach might also help here.
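A minimal sketch of that idea (not an existing ObsPy API; the toy response function stands in for the real evalresp call):

import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline

# Evaluate the response on a coarse log-spaced grid only and
# spline-interpolate amplitude and unwrapped phase onto the full
# FFT frequency axis.
def interpolate_response(response_fn, full_freqs, n_coarse=1000):
    # Skip frequency zero; it would require extrapolation and should be
    # handled separately in real code.
    coarse = np.logspace(np.log10(full_freqs[1]),
                         np.log10(full_freqs[-1]), n_coarse)
    values = response_fn(coarse)
    amp = InterpolatedUnivariateSpline(
        coarse, np.abs(values))(full_freqs[1:])
    phase = InterpolatedUnivariateSpline(
        coarse, np.unwrap(np.angle(values)))(full_freqs[1:])
    out = np.empty(len(full_freqs), dtype=np.complex128)
    out[0] = 0.0  # placeholder for the zero-frequency term
    out[1:] = amp * np.exp(1j * phase)
    return out

# Toy single-pole "response" standing in for the real evalresp call.
resp = interpolate_response(lambda f: 1.0 / (1.0 + 1j * f),
                            np.fft.rfftfreq(2 ** 20, d=0.01))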

flixha

comment created time in a month

Pull request review comment obspy/obspy

signal: first try at PAZ fitting from FAP info

 def estimate_wood_anderson_amplitude_using_response(response, amplitude,
     return wa_ampl

+def _paz_to_freq_resp(freqs, zeros, poles, scale_fac):
+    b, a = scipy.signal.ltisys.zpk2tf(zeros, poles, scale_fac)
+    if not isinstance(a, np.ndarray) and a == 1.0:
+        a = [1.0]
+    return scipy.signal.freqs(b, a, freqs * 2 * np.pi)[1]
+
+
+def _unpack_paz(x, numpoles, numzeros):
+    poles_real = np.array(x[:numpoles * 2:2])
+    poles_imag = np.array(x[1:numpoles * 2:2])
+    poles = poles_real + 1j * poles_imag
+    zeros_real = np.array(x[numzeros * 2:numzeros * 2 + numzeros * 2:2])
+    zeros_imag = np.array(x[numzeros * 2 + 1:numzeros * 2 + numzeros * 2:2])
+    zeros = zeros_real + 1j * zeros_imag
+    scale_fac = x[-1]
+    return poles, zeros, scale_fac
+
+
+def _response_misfit(reference, other):
+    """
+    Calculate a misfit measure from two complex response vectors
+    """
+    # just the simple euclidean distance for now. tweaking this might improve
+    # the fitting maybe..
+    misfit = scipy.spatial.distance.euclidean(reference, other)

Regularizing can have the same effect and would be less hacky I think. Adding some kind of term to the misfit that promotes smoothness of the solution might work.
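A minimal sketch of one such regularized misfit (the weight and the roughness norm are assumptions, not part of the PR):

import numpy as np

# Penalize second differences of the modelled amplitude spectrum on top
# of the plain data misfit to promote a smooth solution.
def regularized_misfit(reference, other, weight=1e-2):
    data_term = np.linalg.norm(reference - other)
    roughness = np.linalg.norm(np.diff(np.abs(other), n=2))
    return data_term + weight * roughness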

megies

comment created time in a month

pull request comment obspy/obspy

signal: first try at PAZ fitting from FAP info

We tend to force the sign of the poles. However, we are starting with an initial model and then perturbing that. Do you think adding an optional initial_guess would help?

There is a simple way to add this to the current proposed implementation: A few optimization methods (e.g. L-BFGS-B) in scipy allow specifying bounds for all parameters. That would solve that and it would probably only converge marginally slower than the current full BFGS method.

Regarding the initial guess one could have a few known curves for each number of poles and zeros, scale them to the correct amplitude, and use them as initial guess. Most response curves are somewhat similar in the end and these could make it much easier to find sensible gradients for the optimization. But I am just guessing as I have never tried this.
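A minimal sketch of the bounded variant (the parameter layout of [real, imag] pairs for the poles, then the zeros, then a scale factor mirrors the PR's _unpack_paz() helper; the quadratic misfit is a placeholder):

import numpy as np
import scipy.optimize

numpoles, numzeros = 4, 3
ndim = numpoles * 2 + numzeros * 2 + 1

# Force the real part of every pole to be non-positive (a stable system);
# leave all other parameters unbounded.
bounds = [(None, 0.0) if (i < numpoles * 2 and i % 2 == 0) else (None, None)
          for i in range(ndim)]

result = scipy.optimize.minimize(
    fun=lambda x: np.sum(x ** 2),  # placeholder misfit
    x0=np.random.uniform(-1.0, 0.0, ndim),
    method="L-BFGS-B",
    bounds=bounds)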

megies

comment created time in a month

pull request comment obspy/obspy

signal: first try at PAZ fitting from FAP info

@krischer for now the loop over the number of poles/zeros lives in the driver Python script quoted above. That part could totally also be integrated into the PR, as an even higher level on top of the current code. The fitting doesn't work as well for each attempt with a different number of poles/zeros; the above picture is basically the best one. The PDF linked in the comment has all tries in it.

Yea I should really learn to read the initial descriptions carefully 🙈

megies

comment created time in a month

Pull request review comment obspy/obspy

signal: first try at PAZ fitting from FAP info

 def estimate_wood_anderson_amplitude_using_response(response, amplitude,
     return wa_ampl

+def _paz_to_freq_resp(freqs, zeros, poles, scale_fac):
+    b, a = scipy.signal.ltisys.zpk2tf(zeros, poles, scale_fac)
+    if not isinstance(a, np.ndarray) and a == 1.0:
+        a = [1.0]
+    return scipy.signal.freqs(b, a, freqs * 2 * np.pi)[1]
+
+
+def _unpack_paz(x, numpoles, numzeros):
+    poles_real = np.array(x[:numpoles * 2:2])
+    poles_imag = np.array(x[1:numpoles * 2:2])
+    poles = poles_real + 1j * poles_imag
+    zeros_real = np.array(x[numzeros * 2:numzeros * 2 + numzeros * 2:2])
+    zeros_imag = np.array(x[numzeros * 2 + 1:numzeros * 2 + numzeros * 2:2])
+    zeros = zeros_real + 1j * zeros_imag
+    scale_fac = x[-1]
+    return poles, zeros, scale_fac
+
+
+def _response_misfit(reference, other):
+    """
+    Calculate a misfit measure from two complex response vectors
+    """
+    # just the simple euclidean distance for now. tweaking this might improve
+    # the fitting maybe..
+    misfit = scipy.spatial.distance.euclidean(reference, other)

I think the spurious oscillations could be reduced by just limiting the maximum number of poles and zeros based on the number of initial response curve values. 20 poles and zeros are 80 degrees of freedom, which is too much for the given data without some form of regularization. A smoother curve can also be enforced via regularization if so desired.

megies

comment created time in a month

Pull request review comment obspy/obspy

signal: first try at PAZ fitting from FAP info

 def estimate_wood_anderson_amplitude_using_response(response, amplitude,
     return wa_ampl

+def _paz_to_freq_resp(freqs, zeros, poles, scale_fac):
+    b, a = scipy.signal.ltisys.zpk2tf(zeros, poles, scale_fac)
+    if not isinstance(a, np.ndarray) and a == 1.0:
+        a = [1.0]
+    return scipy.signal.freqs(b, a, freqs * 2 * np.pi)[1]
+
+
+def _unpack_paz(x, numpoles, numzeros):
+    poles_real = np.array(x[:numpoles * 2:2])
+    poles_imag = np.array(x[1:numpoles * 2:2])
+    poles = poles_real + 1j * poles_imag
+    zeros_real = np.array(x[numzeros * 2:numzeros * 2 + numzeros * 2:2])
+    zeros_imag = np.array(x[numzeros * 2 + 1:numzeros * 2 + numzeros * 2:2])
+    zeros = zeros_real + 1j * zeros_imag
+    scale_fac = x[-1]
+    return poles, zeros, scale_fac
+
+
+def _response_misfit(reference, other):
+    """
+    Calculate a misfit measure from two complex response vectors
+    """
+    # just the simple euclidean distance for now. tweaking this might improve
+    # the fitting maybe..
+    misfit = scipy.spatial.distance.euclidean(reference, other)

If this converges it is probably hard to create a better misfit functional - if you find a few examples where it does not converge, one has to think a bit harder. I think that in particular the phase might sometimes be a bit hard to fit.

megies

comment created time in a month

Pull request review comment obspy/obspy

signal: first try at PAZ fitting from FAP info

 def estimate_wood_anderson_amplitude_using_response(response, amplitude,
     return wa_ampl

+def _paz_to_freq_resp(freqs, zeros, poles, scale_fac):
+    b, a = scipy.signal.ltisys.zpk2tf(zeros, poles, scale_fac)
+    if not isinstance(a, np.ndarray) and a == 1.0:
+        a = [1.0]
+    return scipy.signal.freqs(b, a, freqs * 2 * np.pi)[1]
+
+
+def _unpack_paz(x, numpoles, numzeros):
+    poles_real = np.array(x[:numpoles * 2:2])
+    poles_imag = np.array(x[1:numpoles * 2:2])
+    poles = poles_real + 1j * poles_imag
+    zeros_real = np.array(x[numzeros * 2:numzeros * 2 + numzeros * 2:2])
+    zeros_imag = np.array(x[numzeros * 2 + 1:numzeros * 2 + numzeros * 2:2])
+    zeros = zeros_real + 1j * zeros_imag
+    scale_fac = x[-1]
+    return poles, zeros, scale_fac
+
+
+def _response_misfit(reference, other):
+    """
+    Calculate a misfit measure from two complex response vectors
+    """
+    # just the simple euclidean distance for now. tweaking this might improve
+    # the fitting maybe..
+    misfit = scipy.spatial.distance.euclidean(reference, other)
+    return misfit
+
+
+def optimize_paz(frequencies, response, numpoles, numzeros, num_tries=3,
+                 maxiter=100000, eps=1e-12):
+    frequencies = np.array(frequencies)
+    response = np.array(response)
+
+    def minimize(_var):
+        poles, zeros, scale_fac = _unpack_paz(
+            _var, numpoles=numpoles, numzeros=numzeros)
+        new_response = _paz_to_freq_resp(
+            freqs=frequencies, zeros=zeros, poles=poles, scale_fac=scale_fac)
+        misfit = _response_misfit(response, new_response)
+        return misfit
+
+    results = []
+    for i in range(num_tries):
+        out = scipy.optimize.minimize(
+            fun=minimize,
+            method="BFGS",
+            x0=np.random.random(numpoles * 2 + numzeros * 2 + 1),
+            options={"eps": eps, "maxiter": maxiter})
+        results.append(out)
+
+    misfit = np.inf
+    for result in results:
+        _poles, _zeros, _scale_fac = _unpack_paz(
+            result.x, numpoles=numpoles, numzeros=numzeros)
+
+        _inverted_response = _paz_to_freq_resp(
+            freqs=frequencies, zeros=_zeros, poles=_poles,
+            scale_fac=_scale_fac)
+
+        _misfit = _response_misfit(response, _inverted_response)

This does not need to be recomputed here. The items in results should have it available at the .fun attribute. You can also query whether it converged and, if none of the results converged, throw an error.
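A minimal, standalone sketch of that suggestion (a toy objective stands in for the PR's misfit):

import numpy as np
import scipy.optimize

# Reuse the converged flag and stored misfit (.success / .fun) from each
# OptimizeResult instead of recomputing, and fail loudly if nothing
# converged.
results = [
    scipy.optimize.minimize(lambda x: np.sum(x ** 2),
                            x0=np.random.random(3))
    for _ in range(3)]
converged = [r for r in results if r.success]
if not converged:
    raise ValueError("None of the optimization attempts converged.")
best = min(converged, key=lambda r: r.fun)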

megies

comment created time in a month

issue comment krischer/seismo_live

unable to build the notebooks

You can try to increase the timeout in the script: just raise the --ExecutePreprocessor.timeout=600 value to something higher.

Thomas-Ulrich

comment created time in 2 months

issue comment krischer/seismo_live

unable to build the notebooks

@megies Do you know what's going on here?

Thomas-Ulrich

comment created time in 2 months

issue comment krischer/seismo_live

unable to build the notebooks

Hmm - what is your basemap version?

Thomas-Ulrich

comment created time in 2 months

issue comment krischer/seismo_live

unable to build the notebooks

Hi Thomas,

this bug is due to newer numpy versions no longer accepting floating point numbers in the np.linspace call for the number of desired array elements.

This has to be changed in the whiten() function in the Ambient Seismic Noise/NoiseCorrelation notebook.
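A minimal sketch of the required change (variable names are made up; the real ones live in the notebook's whiten() function):

import numpy as np

# Newer numpy requires an integer sample count, so cast explicitly.
n_samples = 1024 / 4                           # division yields a float
freqs = np.linspace(0.0, 1.0, int(n_samples))  # cast before the call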

Thomas-Ulrich

comment created time in 2 months

issue comment SeismicData/pyasdf

setup broken between 0.5.1 and 0.6.1

No worries :-) Glad you figured it out!

viktor76525

comment created time in 2 months

issue comment SeismicData/pyasdf

setup broken between 0.5.1 and 0.6.1

I don't see what could have caused that going from 0.6.0 to 0.6.1 - the only change in the setup.py is that there is actually one less dependency: https://github.com/SeismicData/pyasdf/compare/0.6.0...0.6.1

The error log seems to indicate an issue with updating setuptools. It tries to update setuptools without the --user flag and that fails. But the log is kind of hard to parse so I'm not sure that is the case.

In general I'd recommend not using the --user flag, but rather creating a virtual environment (supported by the Python stdlib since Python 3.3 or so) or installing conda and creating a separate environment.

In conclusion, I think it's an issue with your system and not pyasdf. Please let me know if I am mistaken.

viktor76525

comment created time in 2 months

pull request comment obspy/obspy

remove_response with ResponseList type response

Not sure how it could make a difference, since it's only discrete sampling points in frequency anyway?

That's a good point.

That sounds like a good option. Keeping it constant should, in normal cases (of the response falling off at the ends), lead to less overamplification, since we are actually applying the inverse. For the phase we still might want to keep the spline (or some kind of linear-ish) extrapolation, though?

I guess one could do a manual linear extrapolation at the edges using the first two and last two samples? Still a bit wild, but better than nothing.
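A minimal sketch of such an edge extrapolation (all names are placeholders, not existing ObsPy API):

import numpy as np

# Linear extrapolation at the edges from the first two and last two
# samples; inside the domain this is plain linear interpolation.
def extrapolate_linear(x, y, x_new):
    out = np.interp(x_new, x, y)  # constant outside the domain by default
    lo, hi = x_new < x[0], x_new > x[-1]
    out[lo] = y[0] + (y[1] - y[0]) / (x[1] - x[0]) * (x_new[lo] - x[0])
    out[hi] = y[-1] + (y[-1] - y[-2]) / (x[-1] - x[-2]) * (x_new[hi] - x[-1])
    return out

freqs = np.linspace(1.0, 10.0, 10)
phases = np.sqrt(freqs)
print(extrapolate_linear(freqs, phases, np.array([0.5, 5.0, 12.0])))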

And none of these changes would change existing code because existing code could never reach it :-)

megies

comment created time in 2 months

pull request comment obspy/obspy

remove_response with ResponseList type response

Yea I guess the spline interpolation makes sense here. One could think about setting ext="const" and it will no longer return the spline values (which will go crazy outside the domain) but just the boundary values. That would definitely help keep it stable. The downside is that the interpolated response will no longer be continuous in the first derivative, which is maybe kind of dangerous. But it probably does not matter if we throw a warning and assume a strict pre-filter is always applied.
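A minimal toy example of the ext="const" behaviour with scipy's InterpolatedUnivariateSpline (the amplitude curve is a stand-in for a real response):

import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline

# Outside the sampled band the spline returns the boundary value
# instead of diverging.
freqs = np.logspace(-2, 1, 50)
amps = 1.0 / (1.0 + freqs ** 2)
spline = InterpolatedUnivariateSpline(freqs, amps, ext="const")
print(spline(100.0))  # clamped to the value at freqs[-1]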

megies

comment created time in 2 months

issue comment obspy/obspy

remove_response with ResponseList type response

I agree with @megies here. There probably are valid use cases to allow that extrapolation when one properly accounts for it (e.g. the aforementioned pre-filter). So converting it to a warning is a good way to go, and if people ignore the warning it is not our fault.

megies

comment created time in 2 months

PR closed krischer/hypoDDpy

Upgraded to python3 and some debugging

UPGRADING

  • Ran 2to3 -p -v -w.
  • Changed "import md5" to "import hashlib".
  • Set subprocess.Popen(universal_newlines=True) in hypodd_compiler.compile_hypodd() so that the stdout output will be a text string (as in Python 2) rather than a byte string.
  • 2-to-3 bug?: changed Exception.message to str(Exception) (line 1104).

MODERNIZING

  • Changed hypodd_relocator.HypoDDRelocator._parse_station_files() to read StationXML files.
  • Modified station_id to not include the network if {net}.{sta} > 7 characters (hypoDD can't handle more).
  • hypodd_relocator._create_output_event_file(): changed res_id.getReferredObject() to res_id.get_referred_object().

DEBUGGING

  • Added a shift_stations attribute (and associated code) to the HypoDDRelocator class (still needs to shift input and output events).
  • Corrected a bug in _write_ph2dt_inp_file(self) where maxsep was calculated using depth differences in meters instead of km.
  • hypodd_relocator.setup_velocity_model(): added a check for > 30 model layers.
  • hypodd_relocator.compile_hypodd(): changed MAXDATA to 3000000 (should be configurable).
+1597 -76

1 comment

9 changed files

WayneCrawford

pr closed time in 2 months

pull request comment krischer/hypoDDpy

Upgraded to python3 and some debugging

I rebased to get rid of the extra files and manually merged the changes into the master branch. Thanks a lot for this!

WayneCrawford

comment created time in 2 months

push event krischer/hypoDDpy

Lion Krischer

commit sha 93b17a845efda36adcf29f5bf5fc42bf58001824

Update README.md

view details

push time in 2 months

push event krischer/hypoDDpy

Wayne Crawford

commit sha 3f59880be821209efcdc1b4fdd8fd0a45e25175a

Upgraded to python3 and some debugging UPGRADING ========= ran 2to3 -p -v -w changed "import md5" to "import hashlib" Set subprocess.Popen(unversal_newlines=True) in hypodd_compiler.compile_hypodd() so that the stdout output will be a text string (as in Python 2) rather than a byte string 2 to 3 bug?: changed Exception.message to str(Exception) (lines 1104) MODERNIZING ============ changed hypodd_relocator.HyopDDRelocator._parse_station_files() to read StationXML files modified station_id to not include network if {net}.{sta} > 7 characters hypodd_relocator._create_output_event_file(): changed res_id.getRefferedObject() to res_id.get_referred_object() DEBUGGING ========= added shift_stations attribute (and associated code) to HypDDRelocator class * Still needs to shift input and output events corrected bug in _write_ph2dt_inp_file(self) where maxsep was calculated using depth differences in meters instead of km hypodd_relocator.setup_velocity_model(): Added check for > 30 model layers hypodd_relocator.compile_hypodd(): Changed MAXDATA to 3000000 (should be configurable)

view details

Wayne Crawford

commit sha a29deef7e8e97a8a1584552d14a29b15020bb371

Update README.md Updated Python and obspy versions

view details

Lion Krischer

commit sha 501d37e1730d32aab837635a2ee20160f5c27513

Running the black code formatter.

view details

Lion Krischer

commit sha 51c39bdf9cb657a8c4fdd3d2e15e2e082383f430

Updating readme.

view details

push time in 2 months

issue comment obspy/obspy

MassDownloader sometimes fails with an error of IncompleteRead

I'd also guess that this is a network connection issue.

The mass downloader already catches a bunch of different errors, but not this particular one. To fix it, just add http.client.IncompleteRead to the list of errors that are caught: https://github.com/obspy/obspy/blob/88687a146a7c3ca8f35c608db2243cf5fed6813c/obspy/clients/fdsn/mass_downloader/utils.py#L32

Please also consider opening a pull request so others can benefit from your changes.
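A minimal sketch of the suggested change (the actual tuple in mass_downloader/utils.py may list different members - see the link above; the point is just to include IncompleteRead):

import http.client
import socket

CATCHABLE_ERRORS = (socket.timeout, ConnectionError,
                    http.client.IncompleteRead)

try:
    pass  # ... the download call would go here ...
except CATCHABLE_ERRORS:
    pass  # logged / retried by the mass downloader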

wangyinz

comment created time in 2 months

issue comment krischer/hypoDDpy

AttributeError: 'ResourceIdentifier' object has no attribute 'getReferredObject'

Best have a look at your XSEED files and try to figure out if there are elements it cannot parse.

langlami

comment created time in 3 months

push event SeismicData/asdf_sextant

Lion Krischer

commit sha cae965ab272c91c01aec5fb820fbe27b1ff6ef84

Updating readme.

view details

push time in 3 months

push event SeismicData/asdf_sextant

Lion Krischer

commit sha 76f05f23d41aa57e4e4b82af4080f9f112f83242

Proper pypi description.

view details

Lion Krischer

commit sha 4768e89d6865dbc196d65d68d07a8cd925cde39d

Final setup.py polish.

view details

push time in 3 months

push event SeismicData/asdf_sextant

Lion Krischer

commit sha 6eecb172986fa0b9948b098d39a9852ba6ddbd05

Formatting with black.

view details

Lion Krischer

commit sha c70f7e591d767f901aa16423b30c2422cb7771b6

First window drawn with pyside2.

view details

Lion Krischer

commit sha 1300ffc15bba0e91bdb6002ee3f2c4b43ccab819

Restructuring.

view details

Lion Krischer

commit sha 037d415b7c411519713ed7b621e164b63b0e2176

Proper package structure.

view details

Lion Krischer

commit sha 0472716daa7f84041497d271f77fb9121adc0196

Proper package structure.

view details

Lion Krischer

commit sha 9f0662508a59ed52eccb1ea60e77d44c7d024f80

Proper package structure.

view details

Lion Krischer

commit sha 6ebf511a4647b74b9cfb5908e27e1a7debcf1493

Centering window works again.

view details

Lion Krischer

commit sha 31605cfaaa845413dc29cf8950be95f123358d29

Adding flake8 config.

view details

Lion Krischer

commit sha 1c54a6e31baca1f286c5b750c96aaa83ae51fa57

Pleasing flake8.

view details

Lion Krischer

commit sha bd5ad14b0260b3aefa188829095d1bf986d38987

We have waveforms again.

view details

Lion Krischer

commit sha e473daa33a1b017927ff9e16d953cf5be977db6c

Some layout work and some more slots are now connected.

view details

Lion Krischer

commit sha b95aafff3ad309a12cd7692ffc3339c1be5dccaa

Fix opening files.

view details

Lion Krischer

commit sha 055a9c48bf3bf1e2572785841f8d51d7aae62f84

Finished connecting all pyside2 slots.

view details

Lion Krischer

commit sha 6464e9742cb6780b1040c2460e4f83800af2d86c

Re-enabling javascript.

view details

Lion Krischer

commit sha b06741613758b4359e21fa5a0e543d8128745e73

Fixing all the javascript and enabling to open a file directly.

view details

Lion Krischer

commit sha 37c186422ef719cbae0de5371abe5295274d6e01

Fine tuning the layout.

view details

Lion Krischer

commit sha cfafb2a0f977d77a8275beee31840f13fd17245b

Updating readme.

view details

Lion Krischer

commit sha a79e87b818495c1a84ae2a69bf9039fb0410fff5

Including everything in distribution.

view details

push time in 3 months

issue comment krischer/hypoDDpy

AttributeError: 'ResourceIdentifier' object has no attribute 'getReferredObject'

Hi Mickael,

the method has been renamed to snake-case, e.g. .get_referred_object(): https://github.com/obspy/obspy/blob/88687a146a7c3ca8f35c608db2243cf5fed6813c/obspy/core/event/resourceid.py#L309

Hope it helps!

Lion

langlami

comment created time in 3 months

delete branch krischer/pyasdf-feedstock

delete branch : 0.6.1

delete time in 3 months

push event conda-forge/pyasdf-feedstock

Lion Krischer

commit sha 18bedbfa7cc6e3532c0b56f07c5816ec235b6e9b

updating to 0.6.1

view details

Lion Krischer

commit sha e53ae8aaa1e1e0a2843ac8e7887f5f8147390316

Merge pull request #12 from krischer/0.6.1 Updating to 0.6.1

view details

push time in 3 months

PR opened conda-forge/pyasdf-feedstock

Updating to 0.6.1
+2 -2

0 comments

1 changed file

pr created time in 3 months

create branch krischer/pyasdf-feedstock

branch : 0.6.1

created branch time in 3 months

push event krischer/pyasdf-feedstock

Lion Krischer

commit sha 7b0d9c12c8eaecbad5f8921a97273f39078f220f

Updating to 0.5.0

view details

conda-forge-admin

commit sha 0a903134a6b797c844562d2383f8040891f2eb7e

MNT: Re-rendered with conda-build 3.18.9, conda-smithy 3.4.8, and conda-forge-pinning 2019.09.08

view details

Lion Krischer

commit sha 9d8a2bcea9b99a28375877b28e17b4c9170aea5c

Changing build number to zero.

view details

Lion Krischer

commit sha e9e1b2c9e9d905eb15d57087d3c54b713140677e

Updating to 0.5.1

view details

Lion Krischer

commit sha c8b7b67b892c721e8466ab4292fefdf77524dd46

Merge pull request #8 from krischer/0.5.0 Updating to 0.5.0

view details

conda-forge-admin

commit sha 65aa0952b1a127c754c14396c8e3611cb51bef20

[ci skip] [skip ci] [cf admin skip] ***NO_CI*** admin migration AutomergeAndRerender

view details

Lion Krischer

commit sha dce9995e32424658b604522e6019d58d4e98c780

updating to 0.6.0

view details

Lion Krischer

commit sha 68c52d659f30c09e40c11cb92017a3e9d88cca19

Updating license identifier.

view details

Lion Krischer

commit sha 30d262bf7fae1e003b2371a37bc689348cbb09e4

Limit to Python >= 3.7:

view details

Lion Krischer

commit sha 4720a5775982b459a573d48a3463cbcbf21bab75

Linting recipe.

view details

Lion Krischer

commit sha 8d6e567e37216866983f24444bf3ebb6a3074419

I can't read error messages.

view details

conda-forge-linter

commit sha 6e1023978df291f25bda0d949abcd1ec521d24a6

MNT: Re-rendered with conda-build 3.19.1, conda-smithy 3.6.14, and conda-forge-pinning 2020.03.19

view details

Lion Krischer

commit sha 4ef752e15cd34c3ca48a63a75229f27a5ab1b929

Merge pull request #11 from krischer/0.6.0 Updating to 0.6.0

view details

push time in 3 months

push event SeismicData/pyasdf

Travis

commit sha 04bc61815f151d257df9a3f6541c980f73b8d67a

Travis build 190 pushed to gh-pages

view details

push time in 3 months

push event SeismicData/pyasdf

Lion Krischer

commit sha 25c85eb1d851d700e530cc94b21bcfacc9ddf103

Pytest is no longer a runtime dependency.

view details

Lion Krischer

commit sha bf30680927c471c0a7da17e96c10a4f0ec84404e

releasing 0.6.1

view details

Lion Krischer

commit sha 8431d2d185ab3b3c94b0f82bccf3f79fc3769456

Typo.

view details

push time in 3 months

created tag SeismicData/pyasdf

tag 0.6.1

Python Interface to ASDF based on ObsPy

created time in 3 months
