Jan Eglinger (imagejan), Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland. Bio-Image Analyst / Image Data Scientist in the Facility for Advanced Imaging and Microscopy at FMI Basel.

imagej/example-script-collection 5

Example project demonstrating how to package ImageJ scripts into a single jar file

imagejan/blind-experiment 4

An ImageJ plugin to assist with blind analysis of image data.

fmi-basel/faim-robocopy 1

A python-based UI for robocopy

imagej/imagej-plugins-batch 1

Batch Processor Plugins for ImageJ

imagejan/angiotool 1

An attempt to convert AngioTool ( https://ccrod.cancer.gov/confluence/display/ROB2/Home ) into a Fiji plugin

familie-westerkamp/pyfam 0

PyFam - a Django-based web app for genealogical networks

imagejan/ABA_J 0

Brain atlasing toolkit for ImageJ

imagejan/ActionBar 0

Custom toolbars and mini applications with ActionBar. Proceedings of the 2nd ImageJ User and Developer conference, Luxembourg, November 6-7th, 2008. http://dx.doi.org/10.5281/zenodo.35653

issue comment mwouts/jupytext

ModuleNotFoundError: No module named 'jupytext'

Great news! Thanks for letting me know.

schuhegger

comment created time in a few seconds

issue opened autonomio/talos

Error using Talos with Unsupervised Learning on LSTM/Autoencoder Model

Hi, I am trying to use Talos to optimize the hyperparameters of an unsupervised LSTM/Autoencoder model. The model works without Talos. Since I do not have y data (no known labels / dependent variables), I created my model as follows. The data input is called "scaled_data".

set parameters for Talos

p = {'optimizer': ['Nadam', 'Adam', 'sgd'], 'losses': ['binary_crossentropy', 'mse'], 'activation':['relu', 'elu']}

create autoencoder model

def create_model(X_input, y_input, params):
    autoencoder = Sequential()
    autoencoder.add(LSTM(12, input_shape=(scaled_data.shape[1], scaled_data.shape[2]),
                         activation=params['activation'], return_sequences=True,
                         kernel_regularizer=tf.keras.regularizers.l2(0.01)))
    autoencoder.add(LSTM(4, activation=params['activation']))
    autoencoder.add(RepeatVector(scaled_data.shape[1]))
    autoencoder.add(LSTM(4, activation=params['activation'], return_sequences=True))
    autoencoder.add(LSTM(12, activation=params['activation'], return_sequences=True))
    autoencoder.add(TimeDistributed(Dense(scaled_data.shape[2])))
    autoencoder.compile(optimizer=params['optimizer'], loss=params['losses'], metrics=['acc'])

    history = autoencoder.fit(X_input, y_input, epochs=10, batch_size=1, validation_split=0.0,
                              callbacks=[EarlyStopping(monitor='acc', patience=3)]).history

    return autoencoder, history

scan_object = talos.Scan(x=scaled_data, y=scaled_data, params=p, model=create_model, experiment_name='LSTM')

My error says: TypeError: create_model() takes 3 positional arguments but 5 were given.

How am I passing 5 arguments? Any ideas on how to fix this issue? I looked through the documentation and other questions, but I don't see anything about an unsupervised model. Thank you!
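For reference (not part of the original report): Talos invokes the model function with five positional arguments, roughly x_train, y_train, x_val, y_val, params, which is where the extra arguments come from. A hedged sketch of a matching signature, reusing the architecture above:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

def create_model(x_train, y_train, x_val, y_val, params):
    # Same autoencoder as above; only the signature and the return order change.
    autoencoder = Sequential()
    autoencoder.add(LSTM(12, input_shape=(x_train.shape[1], x_train.shape[2]),
                         activation=params['activation'], return_sequences=True))
    autoencoder.add(LSTM(4, activation=params['activation']))
    autoencoder.add(RepeatVector(x_train.shape[1]))
    autoencoder.add(LSTM(4, activation=params['activation'], return_sequences=True))
    autoencoder.add(LSTM(12, activation=params['activation'], return_sequences=True))
    autoencoder.add(TimeDistributed(Dense(x_train.shape[2])))
    autoencoder.compile(optimizer=params['optimizer'], loss=params['losses'], metrics=['acc'])
    history = autoencoder.fit(x_train, y_train, epochs=10, batch_size=1)
    # Talos conventionally expects the history first and the model second.
    return history, autoencoder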

created time in a few seconds

issue comment mwouts/jupytext

Jupytext on JupyterLab 3.0.5 is not automatically syncing notebooks on save

Thanks @djakubiec for your reports, and thanks for testing jupytext==1.8.2; that way I can rule out a potential regression in 1.9.0.

Would it be an option for you to install jupyter and jupytext in a conda environment, as documented in the last paragraph of this comment?

That might be easier than trying to use config files (and sorry you can't use the .json file here because of https://github.com/jupyterlab/nbclassic/issues/41, I'll try to see how to fix that...)

djakubiec

comment created time in a minute

issue comment marcomusy/vedo

3D elements are not rendered correctly

@marcomusy Btw, the bug with basix only affects second-order geometries. If we consider the data from the vtu above, we can visualize the real ordering (when taking the ordering of the topology into account):

import vedo
import numpy as np
geo = np.array([[1., 0., 0.],
                [0., 0., 0.],
                [0., 1., 0.],
                [1., 1., 0.],
                [0., 0., 1.],
                [1., 0., 1.],
                [0., 1., 1.],
                [1., 1., 1.]])
topo = np.array([1, 4, 6, 2, 0, 5, 7, 3])
geo2 = np.zeros((len(topo), 3))
for i in range(len(topo)):
    geo2[i] = geo[topo[i], :]

pts = vedo.Points(geo2)
vedo.show(pts, pts.labels("id"))
vedo.screenshot("mesh.png")

yielding a mesh in counterclockwise order (I admit that the choice of axes is weird; we will have a fix in dolfinx for this tomorrow).

jorgensd

comment created time in 3 minutes

Pull request review comment apache/superset

fix: missing key when verifying adhoc filters in merge_extra_filters

 def get_filter_key(f: Dict[str, Any]) -> str:
         for existing in adhoc_filters:
             if (
                 existing["expressionType"] == "SIMPLE"
-                and existing["comparator"] is not None
-                and existing["subject"] is not None
+                and "comparator" in existing
+                and "subject" in existing

OK, I made this change

bryanck

comment created time in 5 minutes

issue opened apache/superset

[dashboard] native filter collapse icon should hide in edit mode

The native filter bar is designed to be used in View mode only. In Edit mode, the collapse icon shows and overlaps the other component control icons. I suggest hiding it in Edit mode to avoid confusion. Screen Shot 2021-01-20 at 11 02 52 AM

https://user-images.githubusercontent.com/67837651/105222917-f8b23e00-5b0f-11eb-8810-7dc1696146b9.mov

cc @agatapst @kkucharc

created time in 7 minutes

pull request comment mwouts/jupytext

Enable integration with pre-commit

Hi @JohnPaton, @Skylion007, we're getting ready!

I am done with the review and I'd like to discuss a few details:

  1. The example with --to in the doc needs to be updated to match the one actually tested (use e.g. https://github.com/mwouts/jupytext/commit/07b323b92da78d24b7f6c0887adc8f9d30607f8f)
  2. Maybe we should temporarily remove the --pipe example from the documentation (at least I can't get it to work in the tests for now). I think option 3 from my previous comment could help here, but I'd prefer to re-add the example later when it is ready.
  3. How do you want to see this merged? Should I squash or rebase? If we squash, are you both OK for giving the full authorship to John (and I'll cite you both in the changelog)? Sorry for asking, it is the first time I see a PR with so many commits and contributors :smile:
  4. Github says that there is a conflict with the base branch, is it easy for you to solve? (Well if we decide to squash, don't bother with that, as I should be able to do it locally)
JohnPaton

comment created time in 8 minutes

push event hms-dbmi/viv

ilan-gold

commit sha f11a3c2b4d8f445722f4631d2e51765de356c191

Fix out of range fetch issue. (#354)

view details

push time in 10 minutes

delete branch hms-dbmi/viv

delete branch : ilan-gold/fix_geotiff_outofrange_issue

delete time in 10 minutes

PR merged hms-dbmi/viv

Fix out of range fetch issue for Tiff

Addresses https://github.com/geotiffjs/geotiff.js/issues/193 on our end. Here is the diff with our current branch: https://github.com/ilan-gold/geotiff.js/compare/ilan-gold/viv_release_080...ilan-gold:ilan-gold/viv_083?expand=1#

Basically, it is possible for tiff fetch sources to request data outside the size of the file, which results in a 416 error from the server. This PR upgrades our geotiff version to one that ensures no requests are made for data outside that range.

You can test this out using http://localhost:8080/?image_url=https://vitessce-demo-data.storage.googleapis.com/test-data/VAN0006-LK-2-85-IMS_PosMode_multilayer.ome.tif?token= (and on Avivator to see the difference) and then selecting the last channel: in Avivator it will not work, but here it will.
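For illustration only (hypothetical Python, not the actual geotiff.js fix, which is JavaScript): the idea is to clamp requested byte ranges to the file size so the server is never asked for an unsatisfiable range.

def clamp_byte_range(offset, length, file_size):
    """Clamp a byte-range request so it never extends past end of file,
    avoiding HTTP 416 (Range Not Satisfiable) responses."""
    end = min(offset + length, file_size)
    return offset, max(0, end - offset)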

+4 -3

0 comment

3 changed files

ilan-gold

pr closed time in 10 minutes

Pull request review comment apache/superset

fix: dict key lookup

 def get_filter_key(f: Dict[str, Any]) -> str:
         for existing in adhoc_filters:
             if (
                 existing["expressionType"] == "SIMPLE"
-                and existing["comparator"] is not None
-                and existing["subject"] is not None
+                and "comparator" in existing
+                and "subject" in existing

Maybe use existing.get("subject") is not None to make sure the logic is still the same?
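For illustration (not part of the PR): the two checks differ when a key is present but maps to None, which is what the suggestion preserves.

existing = {"expressionType": "SIMPLE", "subject": None}
print("subject" in existing)                   # True: the key exists even though its value is None
print(existing.get("subject") is not None)     # False: matches the original is-not-None logic
print(existing.get("comparator") is not None)  # False: also safe when the key is missing entirely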

bryanck

comment created time in 11 minutes

issue comment apache/superset

Certified metric icons are various sizes

@etr2460 Thanks for reporting. We are aware of this issue; the icon migration project has been put on hold until the v1.0.1 release.

etr2460

comment created time in 12 minutes

push event ilastik/ilastik

Dominik Kutra

commit sha 7fa4d32650d3e9274c6b4950ddfc84cd2c064827

use conda-installed tifffile in plugin. With tifffile 0.17 (which we currently distribute), tifffile is no longer shipped as part of skimage, but as a dependency.

view details

Dominik Kutra

commit sha 3e80569d12a378197aa2298b93fcb1eb18504c0c

Merge pull request #2373 from ilastik/tifffile.external-removed: use conda-installed tifffile in plugin

view details

push time in 17 minutes

PR merged ilastik/ilastik

use conda installed tifffile in plugin

With tifffile 0.17 (which we currently distribute), tifffile is no longer shipped as part of skimage, but as a dependency.

Also, tifffile.imsave is deprecated in favor of tifffile.imwrite.
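A minimal sketch of the rename (array and file name are arbitrary):

import numpy as np
import tifffile

data = np.zeros((64, 64), dtype=np.uint8)
tifffile.imwrite("out.tif", data)  # preferred; tifffile.imsave is a deprecated alias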

+2 -17

0 comment

2 changed files

k-dominik

pr closed time in 17 minutes

issue opened tidyverse/ggplot2

contour fails when coordinates are not aligned with axes

Below I have regularly spaced points that are not aligned with the plot axes. I want to plot contour lines, but geom_contour seems to fail here. I vaguely remember this used to work in some former ggplot2 version, but I tried versions 3.3.2 and 3.3.0 and those also don't plot contours for this data set, so I might be wrong... I would expect geom_contour to be able to handle this, since the points are fairly regularly spaced (disregarding minor rounding errors from the rotation calculation I've done here).

For context: these could be spatial data points (XY coordinates) placed on a regular grid for which I've obtained several continuous variables, e.g. groundwater levels and chloride concentrations.

library(ggplot2)

df <- expand.grid(x = 1:10,
                  y = 1:10)
df$z <- c(volcano[53:62, 29:38]) # arbitrary

ggplot(df, aes(x = x, y = y)) +
  geom_point() +
  geom_contour(aes(z = z)) +
  coord_fixed()

# rotate points 15 degrees counterclockwise
angle <- atan(df$y/df$x) * 180/pi + 15
df$rotx <- cos(angle * pi/180) * sqrt(df$x^2 + df$y^2)
df$roty <- sin(angle * pi/180) * sqrt(df$x^2 + df$y^2)

p <- ggplot(df, aes(x = rotx, y = roty)) +
  geom_point() +
  coord_fixed()

p + geom_contour(aes(z = z))
#> Warning: stat_contour(): Zero contours were generated
#> Warning in min(x): no non-missing arguments to min; returning Inf
#> Warning in max(x): no non-missing arguments to max; returning -Inf

Created on 2021-01-20 by the reprex package (v0.3.0)

Session info

devtools::session_info()
#> - Session info ---------------------------------------------------------------
#>  setting  value                       
#>  version  R version 4.0.3 (2020-10-10)
#>  os       Windows 10 x64              
#>  system   x86_64, mingw32             
#>  ui       RTerm                       
#>  language (EN)                        
#>  collate  English_Belgium.1252        
#>  ctype    English_Belgium.1252        
#>  tz       Europe/Paris                
#>  date     2021-01-20                  
#> 
#> - Packages -------------------------------------------------------------------
#>  package     * version date       lib source        
#>  assertthat    0.2.1   2019-03-21 [1] CRAN (R 4.0.2)
#>  callr         3.5.1   2020-10-13 [1] CRAN (R 4.0.3)
#>  cli           2.2.0   2020-11-20 [1] CRAN (R 4.0.3)
#>  colorspace    2.0-0   2020-11-11 [1] CRAN (R 4.0.3)
#>  crayon        1.3.4   2017-09-16 [1] CRAN (R 4.0.2)
#>  curl          4.3     2019-12-02 [1] CRAN (R 4.0.2)
#>  DBI           1.1.1   2021-01-15 [1] CRAN (R 4.0.3)
#>  desc          1.2.0   2018-05-01 [1] CRAN (R 4.0.2)
#>  devtools      2.3.2   2020-09-18 [1] CRAN (R 4.0.3)
#>  digest        0.6.27  2020-10-24 [1] CRAN (R 4.0.3)
#>  dplyr         1.0.3   2021-01-15 [1] CRAN (R 4.0.3)
#>  ellipsis      0.3.1   2020-05-15 [1] CRAN (R 4.0.2)
#>  evaluate      0.14    2019-05-28 [1] CRAN (R 4.0.2)
#>  fansi         0.4.2   2021-01-15 [1] CRAN (R 4.0.3)
#>  farver        2.0.3   2020-01-16 [1] CRAN (R 4.0.2)
#>  fs            1.5.0   2020-07-31 [1] CRAN (R 4.0.3)
#>  generics      0.1.0   2020-10-31 [1] CRAN (R 4.0.3)
#>  ggplot2     * 3.3.3   2020-12-30 [1] CRAN (R 4.0.3)
#>  glue          1.4.2   2020-08-27 [1] CRAN (R 4.0.3)
#>  gtable        0.3.0   2019-03-25 [1] CRAN (R 4.0.2)
#>  highr         0.8     2019-03-20 [1] CRAN (R 4.0.2)
#>  htmltools     0.5.0   2020-06-16 [1] CRAN (R 4.0.3)
#>  httr          1.4.2   2020-07-20 [1] CRAN (R 4.0.3)
#>  isoband       0.2.3   2020-12-01 [1] CRAN (R 4.0.3)
#>  knitr         1.30    2020-09-22 [1] CRAN (R 4.0.3)
#>  labeling      0.4.2   2020-10-20 [1] CRAN (R 4.0.3)
#>  lifecycle     0.2.0   2020-03-06 [1] CRAN (R 4.0.2)
#>  magrittr      2.0.1   2020-11-17 [1] CRAN (R 4.0.3)
#>  memoise       1.1.0   2017-04-21 [1] CRAN (R 4.0.2)
#>  mime          0.9     2020-02-04 [1] CRAN (R 4.0.0)
#>  munsell       0.5.0   2018-06-12 [1] CRAN (R 4.0.2)
#>  pillar        1.4.7   2020-11-20 [1] CRAN (R 4.0.3)
#>  pkgbuild      1.2.0   2020-12-15 [1] CRAN (R 4.0.3)
#>  pkgconfig     2.0.3   2019-09-22 [1] CRAN (R 4.0.2)
#>  pkgload       1.1.0   2020-05-29 [1] CRAN (R 4.0.2)
#>  prettyunits   1.1.1   2020-01-24 [1] CRAN (R 4.0.2)
#>  processx      3.4.5   2020-11-30 [1] CRAN (R 4.0.3)
#>  ps            1.5.0   2020-12-05 [1] CRAN (R 4.0.3)
#>  purrr         0.3.4   2020-04-17 [1] CRAN (R 4.0.2)
#>  R6            2.5.0   2020-10-28 [1] CRAN (R 4.0.3)
#>  remotes       2.2.0   2020-07-21 [1] CRAN (R 4.0.3)
#>  rlang         0.4.10  2020-12-30 [1] CRAN (R 4.0.3)
#>  rmarkdown     2.6     2020-12-14 [1] CRAN (R 4.0.3)
#>  rprojroot     2.0.2   2020-11-15 [1] CRAN (R 4.0.3)
#>  scales        1.1.1   2020-05-11 [1] CRAN (R 4.0.2)
#>  sessioninfo   1.1.1   2018-11-05 [1] CRAN (R 4.0.2)
#>  stringi       1.5.3   2020-09-09 [1] CRAN (R 4.0.3)
#>  stringr       1.4.0   2019-02-10 [1] CRAN (R 4.0.2)
#>  testthat      3.0.1   2020-12-17 [1] CRAN (R 4.0.3)
#>  tibble        3.0.5   2021-01-15 [1] CRAN (R 4.0.3)
#>  tidyselect    1.1.0   2020-05-11 [1] CRAN (R 4.0.2)
#>  usethis       2.0.0   2020-12-10 [1] CRAN (R 4.0.3)
#>  vctrs         0.3.6   2020-12-17 [1] CRAN (R 4.0.3)
#>  withr         2.4.0   2021-01-16 [1] CRAN (R 4.0.3)
#>  xfun          0.20    2021-01-06 [1] CRAN (R 4.0.3)
#>  xml2          1.3.2   2020-04-23 [1] CRAN (R 4.0.2)
#>  yaml          2.2.1   2020-02-01 [1] CRAN (R 4.0.0)
#> 
#> [1] C:/Users/casne/OneDrive/Documenten/R/library_cas
#> [2] C:/Program Files/R/R-4.0.3/library


created time in 18 minutes

issue opened apache/superset

Certified metric icons are various sizes

Screenshot


Description

When multiple metrics are certified, it seems like the icons are different sizes. No clue why, but it seems to have been introduced with the new dataset panel.

Design input

They should all be the same size

created time in 20 minutes

create branch vanvalenlab/deepcell-tf

branch : deepcell-cpu

created branch time in 21 minutes

Pull request review comment mwouts/jupytext

Enable integration with pre-commit

 Note that these hooks do not update the `.ipynb` notebook when you pull. Make su
 
 ## Using Jupytext with the pre-commit package manager
 
-Using Jupytext with the [pre-commit package manager](https://pre-commit.com/) is another option. You could add the following to your `.pre-commit-config.yaml` file:
-```
+Using Jupytext with the [pre-commit package manager](https://pre-commit.com/) is another option. You could add the following to your `.pre-commit-config.yaml` file to convert all staged notebooks to python scripts in `py:percent` format (the default):
+
+```yaml
 repos:
--   repo: local
+-   repo: https://github.com/mwouts/jupytext
+    rev: master
     hooks:
     - id: jupytext
-      name: jupytext
-      entry: jupytext --to md
-      files: .ipynb
-      language: python
 ```
 
-Here is another `.pre-commit-config.yaml` example that uses the --pre-commit mode of Jupytext to convert all `.ipynb` notebooks to `py:light` representation and unstage the `.ipynb` files before committing.
-```
+You can also provide arguments to Jupytext in pre-commit, for example to produce several kinds of output files:
+
+```yaml
 repos:
-  -
-    repo: local
+-   repo: https://github.com/mwouts/jupytext
+    rev: master
     hooks:
-      -
-        id: jupytext
-        name: jupytext
-        entry: jupytext --from ipynb --to py:light --pre-commit
-        pass_filenames: false
-        language: python
-      -
-        id: unstage-ipynb
-        name: unstage-ipynb
-        entry: git reset HEAD **/*.ipynb
-        pass_filenames: false
-        language: system
+    - id: jupytext
+      args: [--to, py:light]
+    - id: jupytext
+      args: [--to, markdown]
+```
+
+If you are combining Jupytext with other pre-commit hooks, you must ensure that all hooks will pass on any files you generate. For example, if you have a hook for using `black` to format all your python code, then you should use Jupytext's `--pipe` option to also format newly generated Python scripts before writing them:
+
+```yaml
+repos:
+-   repo: https://github.com/mwouts/jupytext
+    rev: master
+    hooks:
+    - id: jupytext
+      args: [--to, py:percent, --pipe, black]
+-   repo: https://github.com/psf/black

@JohnPaton, what would you think of removing this example? I was trying to add a test for it, but we seem to have tricky issues; maybe we add that in another PR?

@requires_pre_commit
def test_pre_commit_hook_sync_black(tmpdir):
    # get the path and revision of this repo, to use with pre-commit
    repo_root = str(Path(__file__).parent.parent.resolve())
    repo_rev = system("git", "rev-parse", "HEAD", cwd=repo_root).strip()

    git = git_in_tmpdir(tmpdir)

    # set up the tmpdir repo with pre-commit
    pre_commit_config_yaml = dedent(
        f"""
        repos:
        - repo: {repo_root}
          rev: {repo_rev}
          hooks:
          - id: jupytext
            args: [--sync, --pipe, black]
            additional_dependencies:
              - black==19.10b0 # Matches hook
        
        - repo: https://github.com/psf/black
          rev: 19.10b0
          hooks:
          - id: black
            language_version: python3
        """
    )
    tmpdir.join(".pre-commit-config.yaml").write(pre_commit_config_yaml)
    git("add", ".pre-commit-config.yaml")
    with tmpdir.as_cwd():
        pre_commit(["install", "--install-hooks"])

    # write test notebook and output file
    nb = new_notebook(
        cells=[new_code_cell("1+1")],
        metadata={
            "jupytext": {"formats": "ipynb,py:percent", "main_language": "python"}
        },
    )
    nb_file = tmpdir.join("test.ipynb")
    py_file = tmpdir.join("test.py")
    write(nb, str(nb_file))

    git("add", ".")
    # First attempt to commit fails with message
    # "Output file test.py is not tracked in the git index"
    with pytest.raises(SystemExit):
        git("commit", "-m", "fails")

    git("add", "test.py")  # stage the output file generated by the hook
    git("commit", "-m", "succeeds")

    assert "test.ipynb" in git("ls-files")
    assert "test.py" in git("ls-files")
    # py_file should have been reformatted by black
    assert "\n1 + 1\n" in py_file.read_text()
JohnPaton

comment created time in 22 minutes

issue comment marcomusy/vedo

3D elements are not rendered correctly

It would be great if it were possible to extend UGrid to take in the topology, geometry, and cell type and return a mesh using the VTK arbitrary-order Lagrange elements

From reading the links, I think it should be doable!

jorgensd

comment created time in 22 minutes

issue comment apache/superset

Standalone embedded charts don't render in iframes

Will verify after today's release, thanks!

etr2460

comment created time in 23 minutes

issue opened ManimCommunity/manim

Division by zero upon instancing of PointCloudDot

Unexpected Behavior

Instantiating a PointCloudDot results in a division by zero. The problematic line can be found here.

Analysis

The variable r is defined by r in np.arange(0, self.radius, self.epsilon). Since np.arange() includes the lower bound, r takes the value zero, which leads to a division by zero on the next line. To avoid that division, it is enough to change the lower bound of np.arange() to self.epsilon. Then r = 0 no longer occurs, while all other values of r remain unchanged.
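A minimal sketch of the proposed change (the loop body is a hypothetical stand-in, not the actual Manim source):

import numpy as np

radius, epsilon = 2.0, 0.05

# Before: np.arange(0, radius, epsilon) starts at r = 0, so the first iteration divides by zero.
# After: starting at epsilon skips r = 0 while leaving every other value of r unchanged.
for r in np.arange(epsilon, radius, epsilon):
    points_per_ring = 1.0 / r  # stand-in for the division that previously failed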

Minimal Example

class PointCloudDotTest(ThreeDScene):
    def construct(self):
        p = PointCloudDot()
        self.play(FadeIn(p))

PointCloudDotTest().render()

created time in 23 minutes

PR opened apache/superset

fix: Stabilize and deprecate legacy alerts module

SUMMARY

We are seeing exponential growth of alert.run_query tasks in our Celery queues, likely due to the permissive scheduling window combined with multiple task retries. This PR 1) reduces the schedule window from 1hr to a few minutes, 2) reduces the retry count from 5 to 1, and 3) removes the arbitrary soft_time_limit value.

Also adding a deprecation notice to this module, as it's been replaced by https://github.com/apache/superset/pull/11711
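Illustrative only (task and app names are assumptions, not the actual Superset code): the kind of Celery task options the summary describes tightening.

from celery import Celery

app = Celery("alerts")

# One retry instead of five, and no soft_time_limit, so failed alert
# queries cannot multiply in the task queue.
@app.task(bind=True, max_retries=1)
def run_query(self, alert_id):
    ...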

TEST PLAN

Alert scheduling should function without exponential growth of celery task queues.

ADDITIONAL INFORMATION


  • [ ] Has associated issue:
  • [ ] Changes UI
  • [ ] Requires DB Migration.
  • [ ] Confirm DB Migration upgrade and downgrade tested.
  • [ ] Introduces new feature or API
  • [ ] Removes existing feature or API
+8 -6

0 comment

1 changed file

pr created time in 23 minutes

Pull request review comment mwouts/jupytext

Enable integration with pre-commit

 Note that these hooks do not update the `.ipynb` notebook when you pull. Make su
 
 ## Using Jupytext with the pre-commit package manager
 
-Using Jupytext with the [pre-commit package manager](https://pre-commit.com/) is another option. You could add the following to your `.pre-commit-config.yaml` file:
-```
+Using Jupytext with the [pre-commit package manager](https://pre-commit.com/) is another option. You could add the following to your `.pre-commit-config.yaml` file to sync all staged notebooks:
+
+```yaml
 repos:
--   repo: local
+-   repo: https://github.com/mwouts/jupytext
+    rev: #CURRENT_TAG/COMMIT_HASH
     hooks:
     - id: jupytext
-      name: jupytext
-      entry: jupytext --to md
-      files: .ipynb
-      language: python
+      args: [--sync]
 ```
 
-Here is another `.pre-commit-config.yaml` example that uses the --pre-commit mode of Jupytext to convert all `.ipynb` notebooks to `py:light` representation and unstage the `.ipynb` files before committing.
+You can provide almost all command line arguments to Jupytext in pre-commit, for example to produce several kinds of output files:
+
+```yaml
+repos:
+-   repo: https://github.com/mwouts/jupytext
+    rev: #CURRENT_TAG/COMMIT_HASH
+    hooks:
+    - id: jupytext
+      args: [--from, ipynb, --to, py:light, --to, markdown]

We still have to update the doc; would you like to take https://github.com/mwouts/jupytext/commit/07b323b92da78d24b7f6c0887adc8f9d30607f8f?

JohnPaton

comment created time in 24 minutes

create branch mwouts/jupytext

branch : JohnPaton-pre-commit-hooks

created branch time in 25 minutes

push event apache/superset

Michael S. Molina

commit sha c85b4c75b11c2525861d1ece6aef46be9a9b0e20

Fix translation files and update documentation (#12595)

view details

push time in 27 minutes

PR merged apache/superset

fix: translation files and update documentation

Reviewers: Turing

SUMMARY

This PR fixes translation files and updates translation generation docs.

Previously, most translations were commented out and line number references in the PO files were broken. When we followed the instructions in the Translating section of CONTRIBUTING.md, all new texts were also commented out and old texts were kept in the translation files, causing maintainability issues. Since all translation file texts were commented out, the PO to JSON generation also failed because it wouldn't identify any change.

To fix these issues I applied the following process in this PR:

  • Uncommented all translation file texts
  • Updated all translation files with new texts and line number references
  • Merged previous translations with updated files
  • Removed all old texts (unused)
  • Changed the section Translating of CONTRIBUTING.md with new instructions to reflect this workflow
  • Enabled i18n and tested language shifts
  • I also created a script to help PO to JSON translation

Now if we follow the new translation instructions we'll have our files being correctly updated without introducing maintainability issues.

I also fully translated Superset to Brazilian Portuguese 🙌🏼 . This has two major objectives:

  • Support a new language
  • Have a fully completed translation file. This will enable translation completion tests. As you can see in the screenshots some parts of the UI are not being translated and this means that we have texts in the application that are not encapsulated by translation functions.

The last contribution of this PR is about version release. We should always update our translation files when we release a new version. This ensures that customers who rely on translation or community contributions have access to the updated PO and JSON files for the version.

This PR is also a requirement for our capitalization work (#12343) because we need the translation process to work correctly so that we can change the texts and not lose the previous translations.

@junlincc @rusackas @ktmud @mihir174

BEFORE/AFTER SCREENSHOTS OR ANIMATED GIF

Screen Shot 2021-01-19 at 9 03 03 AM Screen Shot 2021-01-19 at 9 03 27 AM Screen Shot 2021-01-19 at 9 04 13 AM Screen Shot 2021-01-19 at 9 04 36 AM Screen Shot 2021-01-19 at 9 05 15 AM Screen Shot 2021-01-19 at 9 05 40 AM Screen Shot 2021-01-19 at 9 07 04 AM Screen Shot 2021-01-19 at 9 07 27 AM Screen Shot 2021-01-19 at 9 08 00 AM Screen Shot 2021-01-19 at 9 08 15 AM

TRANSLATION FIXES TEST PLAN

1. Change any file that contains translatable texts
2. Follow the guidelines in the Translating section of CONTRIBUTING.md
3. Check that translation files are updated and previous translations are kept intact

BRAZILIAN PORTUGUESE SUPPORT TEST PLAN

1. Enable i18n (to do that, you can comment out line 290 in superset/config.py)
2. Change the language to Brazilian Portuguese via the flag icon in the upper right corner
3. Navigate through all modules and see the translated texts

ADDITIONAL INFORMATION

  • [ ] Has associated issue:
  • [x] Changes UI
  • [ ] Requires DB Migration.
  • [ ] Confirm DB Migration upgrade and downgrade tested.
  • [ ] Introduces new feature or API
  • [ ] Removes existing feature or API
+98938 -42319

6 comments

26 changed files

michael-s-molina

pr closed time in 27 minutes

pull request comment apache/superset

chore(explore): added tooltips to timepicker

@srinify Srini, can either Robert or Daniel expedite getting documentation ready for this new feature? All the description is written in feat(explore): time picker enhancement 🙏

zhaoyongjie

comment created time in 27 minutes

issue comment lambdaloop/anipose

anipose analyze extremely slow

Could you provide the output of conda list in the anipose environment?

grego1979

comment created time in 36 minutes

Pull request review comment scikit-image/scikit-image

Add option for not forcing grayscale conversion of the image in the color.label2rgb function

 def _match_label_with_color(label, colors, bg_label, bg_color):
 
 @change_default_value("bg_label", new_value=0, changed_version="0.19")
 def label2rgb(label, image=None, colors=None, alpha=0.3,
-              bg_label=-1, bg_color=(0, 0, 0), image_alpha=1, kind='overlay'):
+              bg_label=-1, bg_color=(0, 0, 0), image_alpha=1, kind='overlay',
+              saturation=0):

Other parameters like image_alpha, alpha, etc. are not used when kind='avg' and there is no warning.
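To make the point concrete (hypothetical sketch, not scikit-image's actual implementation): a guard that warns when overlay-only arguments are supplied together with kind='avg'.

import warnings

def label2rgb_sketch(label, image=None, alpha=0.3, image_alpha=1,
                     kind='overlay', saturation=0):
    # Warn when arguments that only apply to kind='overlay' were overridden.
    if kind == 'avg' and (alpha != 0.3 or image_alpha != 1):
        warnings.warn("alpha and image_alpha are ignored when kind='avg'")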

charlielito

comment created time in 39 minutes

PR opened apache/superset

Updates to Superset Site for 1.0

SUMMARY

Updated all screenshots to showcase the Superset 1.0 design.

BEFORE/AFTER SCREENSHOTS OR ANIMATED GIF

Screen Shot 2021-01-20 at 1 35 51 PM Screen Shot 2021-01-20 at 1 35 04 PM

TEST PLAN

  • Check for any spelling / grammar issues
  • Check /docs/creating-charts-dashboards/first-dashboard
  • Check /docs/creating-charts-dashboards/exploring-data
+216 -337

0 comment

51 changed files

pr created time in 40 minutes
