
baryluk/erljs 115

Erlang in the web browser

baryluk/ex11 16

Joe Armstrong implementation of X11 protocol for Erlang

baryluk/eircd 8

High-performance distributed IRC server

baryluk/cords 7

Cord (rope) implementation for D programming language, with rich high performance string processing library

baryluk/erozja 5

Simple RSS feed client with GUI, and RSS aggregator webserver with own feeds and web interface.

baryluk/denes 4

Simple DNS server and framework in Erlang

baryluk/erlyjs 3

Fork of Roberto Saccon's ErlyJs (copied from hassy so it is not lost)

baryluk/bbtree 2

Binary search trees of bounded balance for Erlang - with rich API.

baryluk/common_collection 2

A simple and ugly wrapper to easily switch between different collections (orddict, gb_trees, dict, rbdict, etc.) in Erlang

baryluk/echo 2

PHP-like echo for the D programming language, with variable expansion in strings

issue comment pypa/setuptools

No --install_requires option to replace deprecated --requires

This is really needed. I want to install all required dependencies (as specified in install_requires in setup.py) without installing the package itself, so I can create a Docker layer with all dependencies that can be cached, and then install the package itself in the final layer. This way rebuilds will be almost instant instead of taking minutes.
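One possible workaround sketch (not a setuptools feature; the setup.py contents and helper name below are hypothetical): statically pull install_requires out of setup.py with the stdlib ast module, write the result to a requirements file, and pip-install that in its own Docker layer.

```python
import ast

# Hypothetical setup.py contents; in practice read the real file from disk.
SETUP_PY = """
from setuptools import setup
setup(
    name="myapp",
    install_requires=["requests>=2.0", "click"],
)
"""

def extract_install_requires(source):
    """Best-effort: find the setup(...) call and return its
    install_requires entries (works only for literal lists of strings)."""
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, ast.Call):
            continue
        callee = node.func
        # setup may be called as setup(...) or setuptools.setup(...)
        name = callee.id if isinstance(callee, ast.Name) else getattr(callee, "attr", None)
        if name != "setup":
            continue
        for kw in node.keywords:
            if kw.arg == "install_requires":
                return [ast.literal_eval(elt) for elt in kw.value.elts]
    return []

print("\n".join(extract_install_requires(SETUP_PY)))
```

Piping this output to a requirements.txt and running `pip install -r requirements.txt` in an earlier Dockerfile step keeps that layer cached until the dependency list itself changes.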

traylenator

comment created time in 10 days

issue comment hashicorp/packer-plugin-docker

Docker provisioner caching

Yeah. Lack of caching is the primary reason I stopped using packer. I migrated back to a Dockerfile and BuildKit approach.

ghost

comment created time in 10 days

PR opened Xilinx-CNS/onload

Set monitoring thread name in efforward test

Useful for cpu profiling and isolation.

Do not check for errors.

pthread_setname_np is non-POSIX, but is present in glibc, FreeBSD, OpenBSD, QNX, Solaris and AIX, not that any of that matters much. So do not bother protecting it with #if defined(_GNU_SOURCE).

+3 -0

0 comment

1 changed file

pr created time in 12 days

push event baryluk/onload

Witold Baryluk

commit sha f1492696be7d38418eb595fe484788f60c5f2bd2

Set monitoring thread name in efforward test

Useful for cpu profiling and isolation. Do not check for errors.

pthread_setname_np is non-POSIX, but is present in glibc, FreeBSD, OpenBSD, QNX, Solaris and AIX, not that any of that matters much. So do not bother protecting it with `#if defined(_GNU_SOURCE)`.

view details

push time in 12 days

issue comment linkedin/iris

Docker build instructions do not work

Same problem here. The packer configs use a very old Ubuntu too, 16.04, and most of the python scripts use #!/usr/bin/env python instead of #!/usr/bin/env python3, which causes a lot of issues.

prestonvanloon

comment created time in 15 days

Pull request review comment apache/airflow

Do not fail KubernetesPodOperator tasks if log following fails

 def monitor_pod(self, pod: V1Pod, get_logs: bool) -> Tuple[State, V1Pod, Optiona
             read_logs_since_sec = None
             last_log_time = None
             while True:
-                logs = self.read_pod_logs(pod, timestamps=True, since_seconds=read_logs_since_sec)
-                for line in logs:
-                    timestamp, message = self.parse_log_line(line.decode('utf-8'))
-                    self.log.info(message)
-                    if timestamp:
-                        last_log_time = timestamp
+                try:
+                    logs = self.read_pod_logs(pod, timestamps=True, since_seconds=read_logs_since_sec)
+                    for line in logs:
+                        timestamp, message = self.parse_log_line(line.decode('utf-8'))
+                        self.log.info(message)
+                        if timestamp:
+                            last_log_time = timestamp
+                except BaseHTTPError as e:
+                    # Catches errors like ProtocolError(TimeoutError).
+                    self.log.warning(
+                        'Failed to read logs for pod %s with exception %s',
+                        pod.metadata.name,
+                        e,
+                        exc_info=True,
+                    )

Please take another look now. The helm chart test failure looks to be unrelated.
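The pattern in the diff above, reduced to a standalone sketch (the names and the stand-in exception are illustrative, not Airflow's actual API): a failed log read is logged as a warning and the loop keeps following logs while the container is still running.

```python
import logging

log = logging.getLogger("pod_launcher_sketch")

class LogReadError(Exception):
    """Stand-in for the BaseHTTPError the real code catches."""

def follow_logs_tolerantly(read_logs, container_is_running):
    """Keep following logs while the container runs; a failed read is
    only a warning, not a task failure."""
    seen = []
    while container_is_running():
        try:
            for line in read_logs():
                log.info(line)
                seen.append(line)
        except LogReadError as e:
            # Do not fail the whole operator: the pod may still be healthy.
            log.warning("Failed to read logs: %s", e)
    return seen
```

The key design choice is that only log-streaming errors are swallowed; the loop still terminates when the container-liveness check itself fails or returns false.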

baryluk

comment created time in 19 days

PullRequestReviewEvent

push event baryluk/airflow

Witold Baryluk

commit sha b79831ed7953efcacce26cc182d8c9c6fb9534bb

Do not fail KubernetesPodOperator tasks if log reading fails

In very long running airflow tasks using KubernetesPodOperator, especially when airflow is running in a different k8s cluster than the one where the pod is started, we see sporadic but reasonably frequent failures like this, after 5-13 hours of runtime:

[2021-08-16 04:00:25,871] {pod_launcher.py:198} INFO - Event: foo-bar.xyz had an event of type Running
[2021-08-16 04:00:25,893] {pod_launcher.py:149} INFO - 210816.0400+0000 app-specific-logs...
... (~a few log lines every few minutes from the app) ...
[2021-08-16 17:20:29,585] {pod_launcher.py:149} INFO - 210816.1720+0000 app-specific-logs....
[2021-08-16 17:27:36,105] {taskinstance.py:1501} ERROR - Task failed with exception
Traceback (most recent call last):
  File "/opt/pysetup/.venv/lib/python3.7/site-packages/urllib3/response.py", line 436, in _error_catcher
    yield
  File "/opt/pysetup/.venv/lib/python3.7/site-packages/urllib3/response.py", line 763, in read_chunked
    self._update_chunk_length()
  File "/opt/pysetup/.venv/lib/python3.7/site-packages/urllib3/response.py", line 693, in _update_chunk_length
    line = self._fp.fp.readline()
  File "/usr/local/lib/python3.7/socket.py", line 589, in readinto
    return self._sock.recv_into(b)
  File "/usr/local/lib/python3.7/ssl.py", line 1071, in recv_into
    return self.read(nbytes, buffer)
  File "/usr/local/lib/python3.7/ssl.py", line 929, in read
    return self._sslobj.read(len, buffer)
TimeoutError: [Errno 110] Connection timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/pysetup/.venv/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1157, in _run_raw_task
    self._prepare_and_execute_task_with_callbacks(context, task)
  File "/opt/pysetup/.venv/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1331, in _prepare_and_execute_task_with_callbacks
    result = self._execute_task(context, task_copy)
  File "/opt/pysetup/.venv/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1361, in _execute_task
    result = task_copy.execute(context=context)
  File "/opt/pysetup/.venv/lib/python3.7/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 366, in execute
    final_state, remote_pod, result = self.create_new_pod_for_operator(labels, launcher)
  File "/opt/pysetup/.venv/lib/python3.7/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 520, in create_new_pod_for_operator
    final_state, remote_pod, result = launcher.monitor_pod(pod=self.pod, get_logs=self.get_logs)
  File "/opt/pysetup/.venv/lib/python3.7/site-packages/airflow/providers/cncf/kubernetes/utils/pod_launcher.py", line 147, in monitor_pod
    for line in logs:
  File "/opt/pysetup/.venv/lib/python3.7/site-packages/urllib3/response.py", line 807, in __iter__
    for chunk in self.stream(decode_content=True):
  File "/opt/pysetup/.venv/lib/python3.7/site-packages/urllib3/response.py", line 571, in stream
    for line in self.read_chunked(amt, decode_content=decode_content):
  File "/opt/pysetup/.venv/lib/python3.7/site-packages/urllib3/response.py", line 792, in read_chunked
    self._original_response.close()
  File "/usr/local/lib/python3.7/contextlib.py", line 130, in __exit__
    self.gen.throw(type, value, traceback)
  File "/opt/pysetup/.venv/lib/python3.7/site-packages/urllib3/response.py", line 454, in _error_catcher
    raise ProtocolError("Connection broken: %r" % e, e)
urllib3.exceptions.ProtocolError: ("Connection broken: TimeoutError(110, 'Connection timed out')", TimeoutError(110, 'Connection timed out'))

Most likely because the task is not emitting a lot of logs, or simply due to a sporadic network slowdown between clusters. So, if this fails, do not fail the whole operator and terminate the task until the call to the `self.base_container_is_running` function also fails or returns false.

view details

push time in 19 days

push event baryluk/airflow

bbenshalom

commit sha ab7658147445161fa3f7f2b139fbf9c223877f77

Add DAG run endpoint for marking a dagrun success or failed(#17839) Co-authored-by: bbenshalom <bbenshalom@outbrain.com> Co-authored-by: Tzu-ping Chung <uranusjr@gmail.com> Co-authored-by: Ephraim Anierobi <splendidzigy24@gmail.com>

view details

Brent Bovenzi

commit sha f7276353ccd5d15773eea6c0d90265650fd22ae3

Fix blank dag dependencies view (#17990) * Fix blank dag dependencies view * calculate graph if node and edges are empty

view details

Bas Harenslak

commit sha ca4f99d349e664bbcf58d3c84139b5f4919f6c8e

Serialize the template_ext attribute to show it in UI (#17985) Co-authored-by: Bas Harenslak <bas@astronomer.io>

view details

Fiyin

commit sha 1e1b3de6e45db32af86d851619dab333f4f52491

Fix grammar in local.rst (#18001)

view details

Jed Cunningham

commit sha 7b3a5f95cd19667a683e92e311f6c29d6a9a6a0b

Hide variable import form if user lacks permission (#18000) This hides the variable import form if the user does not have the "can create on variable" permission.

view details

Witold Baryluk

commit sha c56af5f06a2838bcf3f535d295fb698407057131

Do not fail KubernetesPodOperator tasks if log reading fails

view details

push time in 22 days

push event baryluk/airflow

Witold Baryluk

commit sha 9622b29b7a3672add829f1f571c7a47042521884

Do not fail KubernetesPodOperator tasks if log reading fails

view details

push time in 23 days

push event baryluk/airflow

Witold Baryluk

commit sha 2a80f0ad83ae3d0954b5c52d1fd5a3bf962acd8d

Do not fail KubernetesPodOperator tasks if log reading fails

view details

push time in 23 days

PullRequestReviewEvent

Pull request review comment apache/airflow

Do not fail KubernetesPodOperator tasks if log following fails

 def read_pod_logs(
                 **additional_kwargs,
             )
         except BaseHTTPError as e:
-            raise AirflowException(f'There was an error reading the kubernetes API: {e}')
+            self.log.warning(f'There was an error reading the kubernetes API: {e}')
+            # Reraise to be catched by self.monitor_pod.
+            raise

Done.

baryluk

comment created time in 23 days

push event baryluk/airflow

Lionel

commit sha 5350cc2791b780a8ce721e565c235921a63565c1

Example xcom update (#17749)

view details

Aakcht

commit sha eda8a51f9c14cc537378dbc7e493e3d00cac694f

Fix docs about login for hdfs connections (#17936)

view details

Kaxil Naik

commit sha dd8b04d815d2ec81ca16164e418814e466675802

Fix grammar in `traceback.html` (#17942) `those` -> `these`

view details

Jarek Potiuk

commit sha a29503e1baf7534c85ebf6685ba91003051c1cea

Fix instantiating Vault Secret Backend during configuration (#17935) When Secrets Backends are instantiated during configuration, not all Airflow packages are yet imported, because they need Secret Backends. We have a weird cyclical relation between models, configuration and settings which forces us to be extra careful around configuration, settings and backends. In this case a top-level import of Connections by the Vault Secret Backend triggered a cyclic import problem (importing airflow models requires configuration to be fully loaded and initialized), but then it could not be initialized because models needed to be imported first. The fix is to move Connections to a local import.

view details

GregKarabinos

commit sha eebfeec4fa798b728141dfdeb4cf70970c71067f

Update docker.rst (#17882) This didn't work on my 2019 Macbook 16 inch until I ran the Linux section. Co-authored-by: Jarek Potiuk <jarek@potiuk.com> Co-authored-by: Kaxil Naik <kaxilnaik@gmail.com>

view details

andrew-candela

commit sha 1aca908941884ac094a4df7a1d378126f9918749

Adds Github Oauth example with team based authorization (#17896)

view details

Sam Wheating

commit sha 9befaeec701e926182d011f374a34729bdc1604a

Fixing bug which restricted the visibility of ImportErrors (#17924)

view details

deedmitrij

commit sha 500780651cfef9254d5e365c0de6f8c7af6d05bf

Add possibility to run DAGs from system tests and see DAGs logs (#17868)

view details

Brent Bovenzi

commit sha 887ef6b9de4f84f40a6992c863554eb59d64dcc0

Add Next Run to UI (#17732)

* add next run to home page
* add nextrun to dag pages
* keep date check consistent
* fix test_views
* Include data interval values in /last_dagruns view
* Use dag.next_dagrun_create_after
* Fix timezone formatting: use `<time>` to use our existing `datetime_utils` to format and handle timezone changes
* Update next and last run tooltips
* wrap meta tags in if statement

Co-authored-by: Tzu-ping Chung <tp@astronomer.io>

view details

eladkal

commit sha 601f22cbf1e9c90b94eda676cedd596afc118254

Refactor BranchDayOfWeekOperator, DayOfWeekSensor (#17940) * Refactor BranchDayOfWeekOperator, DayOfWeekSensor. 1. Extract shared code to utils. 2. Allow any iterable as week_day.

view details

aa1371

commit sha 226a2b8c33d28cd391717191efb4593951d1f90c

Queue support for DaskExecutor using Dask Worker Resources (#16829)

view details

Ash Berlin-Taylor

commit sha 9c19f0db7dd39103ac9bc884995d286ba8530c10

Improve MySqlToHiveOperator tests (#17958) These tests were actually just testing the hook again (by checking the command executed) but were asserting nothing about the CSV passed to the hook. This change makes the operator tests check the logic in the operator and no longer tests the hook code again. `test_mysql_to_hive_verify_loaded_values` was removed as since we removed Java/a real hive CLI from our tests this has only been testing our mock, not the real code.

view details

Brent Bovenzi

commit sha ee93935bab6e5841b48a07028ea701d9aebe0cea

Only show Pause/Unpause tooltip on hover (#17957) After clicking on the Pause/Unpause toggle, the element remained in focus and therefore the toggle wouldn't go away. After a change event we will also trigger a blur event to remove the focus so the tooltip will only appear on hover. Fixes: #16500

view details

Jens Larsson

commit sha fac06a19500a7645cc8226b656d3a068423e4ff4

Add robots.txt and X-Robots-Tag header (#17946) Co-authored-by: thejens <jens.larsson@tink.com>

view details

Kaxil Naik

commit sha fbdb6882e4789c55c751b8be466eed89945d3241

Fix passing Jinja templates in ``DateTimeSensor`` (#17959)

While fixing ``DateTimeSensorAsync`` in https://github.com/apache/airflow/pull/17747 -- I broke ``DateTimeSensor``. As `target_time` is a template_field for `DateTimeSensor`, Jinja tries to render it, which does not work if the input is a datetime object, or if someone passes just a template field like ``{{ execution_date }}`` it throws an error:

```
DateTimeSensor(task_id="foo", target_time="{{ execution_time }}")
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/pendulum/parsing/__init__.py", line 131, in _parse
    dt = parser.parse(
  File "/usr/local/lib/python3.9/site-packages/dateutil/parser/_parser.py", line 1368, in parse
    return DEFAULTPARSER.parse(timestr, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/dateutil/parser/_parser.py", line 643, in parse
    raise ParserError("Unknown string format: %s", timestr)
dateutil.parser._parser.ParserError: Unknown string format: {{ execution_time }}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<string>", line 3, in <module>
  File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 186, in apply_defaults
    result = func(self, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/airflow/sensors/date_time.py", line 66, in __init__
    self.target_time = timezone.parse(target_time)
  File "/usr/local/lib/python3.9/site-packages/airflow/utils/timezone.py", line 175, in parse
    return pendulum.parse(string, tz=timezone or TIMEZONE, strict=False)  # type: ignore
  File "/usr/local/lib/python3.9/site-packages/pendulum/parser.py", line 29, in parse
    return _parse(text, **options)
  File "/usr/local/lib/python3.9/site-packages/pendulum/parser.py", line 45, in _parse
    parsed = base_parse(text, **options)
  File "/usr/local/lib/python3.9/site-packages/pendulum/parsing/__init__.py", line 74, in parse
    return _normalize(_parse(text, **_options), **_options)
  File "/usr/local/lib/python3.9/site-packages/pendulum/parsing/__init__.py", line 135, in _parse
    raise ParserError("Invalid date string: {}".format(text))
pendulum.parsing.exceptions.ParserError: Invalid date string: {{ execution_time }}
```

This PR fixes it by reverting the change in `DateTimeSensor` and parsing the string to datetime in `DateTimeSensorAsync.execute`.

view details

David Lum

commit sha 16b47cecfb5cf88b0176a59589cbd77e0eaccfd3

Invalidate Vault cached prop when not authenticated (#17387)

view details

Mario Taddeucci

commit sha fe34582fc2f418b96a5dc5c10b8b6a8b48bdb7ea

New google operator: SQLToGoogleSheetsOperator (#17887)

view details

kiwy42

commit sha 0791719d09818e8313c4b4e172c993397236323c

Add support for kinit options [-f|-F] and [-a|-A] (#17816) kinit can now emit non forwardable ticket and ticket without originate IP.

view details

Yash Dodeja

commit sha 683fbd49d7873edd90632926e15d51d970aa00fb

Fix Clear task instances endpoint resets all DAG runs bug (#17961)

view details

Josh Fell

commit sha fdbb798b9d3f58a33200c012cb546d60c06fc84f

Making spelling of "TaskFlow" consistent in docs (#17968)

view details

push time in 23 days

push event baryluk/airflow

Witold Baryluk

commit sha 8af9479722179a1db5532a4ba513e8e9d8eca47d

Do not fail KubernetesPodOperator tasks if log reading fails


push time in 23 days

issue closedapache/airflow

Some timezones in Web UI shows as number, not timezone name

Apache Airflow version

2.1.2

Operating System

Linux

Versions of Apache Airflow Providers

2.1.2

Deployment

Other 3rd-party Helm chart

Deployment details

n/a

What happened

Go to the Web UI.

Search, for example, for "Chile/Continental"; after clicking, it will be added to the list of timezones, but will be displayed as "-04 (-04:00)".

Screenshot from 2021-08-30 10-25-55

It happens for many other timezones.

What you expected to happen

"CLT (-04:00)" at this time of year, maybe?

How to reproduce

Clear

Anything else

n/a

Are you willing to submit PR?

  • [ ] Yes I am willing to submit a PR!

Code of Conduct

closed time in 24 days

baryluk

issue commentapache/airflow

Some timezones in Web UI shows as number, not timezone name

Ok. Thanks.

baryluk

comment created time in 24 days

Pull request review commentapache/airflow

Do not fail KubernetesPodOperator tasks if log following fails

     def read_pod_logs(
                 **additional_kwargs,
             )
         except BaseHTTPError as e:
-            raise AirflowException(f'There was an error reading the kubernetes API: {e}')
+            self.log.warning(f'There was an error reading the kubernetes API: {e}')
+            # Reraise to be caught by self.monitor_pod.
+            raise

Ok about self.log.exception.

But what about raise? I think it is still needed: it lets tenacity re-raise too, so the proper exception can be caught in monitor_pod.
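A minimal sketch of the warn-then-reraise pattern being discussed (the `fetch` parameter is hypothetical, introduced only so the example is self-contained): logging keeps the failure visible, while the bare `raise` preserves the original exception so a retry decorator can retry it and the caller can ultimately catch it.

```python
import logging

log = logging.getLogger(__name__)

def read_pod_logs(fetch):
    """Fetch logs via fetch(); on failure, log a warning for
    visibility and re-raise the original exception unchanged so the
    caller (or a retry decorator) can handle it."""
    try:
        return fetch()
    except Exception as e:
        log.warning('There was an error reading the kubernetes API: %s', e)
        # Bare raise: preserves the original exception and traceback.
        raise
```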

baryluk

comment created time in 24 days

PullRequestReviewEvent

pull request commentapache/airflow

Do not fail KubernetesPodOperator tasks if log following fails

@jedcunningham Ready for review.

baryluk

comment created time in 25 days

push eventbaryluk/airflow

Witold Baryluk

commit sha fe4ba1462c9f5d125ef25339a4756920aac65d02

Do not fail KubernetesPodOperator tasks if log reading fails


push time in 25 days

push eventbaryluk/airflow

Witold Baryluk

commit sha 75c79b9917ef3db6eac7dcaa8c8547e4f098c7e4

Do not fail KubernetesPodOperator tasks if log reading fails


push time in 25 days

push eventbaryluk/airflow

Jarek Potiuk

commit sha 0282f4167d50409da5b0b9dab898ea76e53d837c

Forces rebuilding the image for cache pushing (#17635) Fixes bug in pushing latest image to cache on "push/schedule". When the build is successful and passes all tests vi either `push' or 'schedule' events, we attempt to rebuild the image with latest constraints just pushed and push it as a fresh cache for Github Registry. This keeps the time to build image small without manually refreshing the cache, it also automatically checks if there is a new "python" base image available so that we can use it in the new cache. There was a bug that the image has not been FORCE_PULLED and rebuilt in this case - just latest images were used. This had so far no negative effects because due to test instability, latest main images pretty much never succeeded in all tests, so the images in `main` were refreshed manually periodically anyway. However for v2-1-test the scope of tests run is far smaller now (no Helm tests, no Provider tests) and they succeed mostly when they should. Also PROD image was built without ".dev0" suffix which also failed. This PR fixes it so that the images are built properly and pushed.


Ash Berlin-Taylor

commit sha e99624d2f17f0aadcb992827e9e721f5e6c978d8

Speed up tests that use BackfillJob (#17648) Calling `heartbeat` was putting in a sleep in which isn't necessary/useful in tests, where we want it to run as quick as possible. The sleep has been kept in "normal" mode as otherwise the status output (`[backfill progress] | finished run %s of %s |` etc.) will be essentially spammed, rather than only being printed every few seconds. This makes the tests/jobs/ run in about 20s (vs 120s without the change.)


Jan Omar

commit sha c22ed08ec4d8a9def1f09e74d51eafee83f87e8c

Chart: use serviceaccount template for log reader rolebinding (#17645)


James Timmins

commit sha 808fb2ad3024f4855fb11a99128b9f67bad44a26

Add Changelog updates for 2.1.3 (#17644) Co-authored-by: Kaxil Naik <kaxilnaik@gmail.com>


Kamil Breguła

commit sha 495535b81e0ac9fbbc25d39aa3b1ad1303417c42

Update pre-commit checks-flynt to 0.66 (#17672) Additionally, we now download flynt configurations from the official repository, which allows us to automatically download updates using the pre-commit autoupdate command


Jed Cunningham

commit sha dfdffa6a4922a85405f9b71520129590c2188ec8

Fix link to generating constraints in BREEZE.rst (#17670)


Kaxil Naik

commit sha 7c96800426946e563d12fdfeb80b20f4c0002aaf

Dev: Remove duplicate step to push Docker Image (#17674) We have the same step few lines below "## Prepare production Docker Image"


Kaxil Naik

commit sha 1cd3d8f94f23396e2f1367822fecc466db3dd170

Update docs on syncing forks (#17675) * Update docs on syncing forks closes https://github.com/apache/airflow/issues/17665


Ash Berlin-Taylor

commit sha d8c0cfea5ff679dc2de55220f8fc500fadef1093

Have the dag_maker fixture (optionally) give SerializedDAGs (#17577) All but one test in test_scheduler_job.py wants to operate on serialized dags, so it makes sense to have this be done in the dag_maker for us, to make each test "smaller".


Jed Cunningham

commit sha 6868ca48b29915aae8c131d694ea851cff1717de

Avoid endless redirect loop when user has no roles (#17613)


Ephraim Anierobi

commit sha cbd9ad2ffaa00ba5d99926b05a8905ed9ce4e698

Remove the use of multiprocessing in TestLocalTaskJob and Improve Tests (#17581) This PR removes the use of multiprocessing in TestLocalTaskJob and improves the test to be more reliable Co-authored-by: Ash Berlin-Taylor <ash_github@firemirror.com>


Guilherme Martins Crocetti

commit sha 83a2858dcbc8ecaa7429df836b48b72e3bbc002a

Docs: Make ``DAG.is_active`` read-only in API (#17667) Add readOnly=True property on DAG.is_active closes: #17639


Jarek Potiuk

commit sha be9911c0855d4d7d4533cc0f82df63d5845a6d7f

Renames main workflow to `Tests` (#17650) This is a long-overdue change for CI workflows. Since we are building images in a separate workflow, the `CI Builds` name of the workflow was - first of all misleading, and secondly - too long. The workflow names displayed in the GitHub UI contains the workflow name as prefix so having as short as possible name is an advantage. The `Tests` names seems to be appropriate because this is in fact what we do in this workflow. The change updates the name of workflow as well as documentation that referred to it and fixes a few inconsistencies found in names of the `Build Image` -> `Build Images` workflow. The sequence diagrams showing the CI workflow have been also regenerated with the new name (thanks to mermaid it was super-easy)


Jarek Potiuk

commit sha 4e59741ff9be87d6aced1164812ab03deab259c8

Remove legacy image convention (#17692) The image convention has been changed recently and we kept it for a while to allow PRs to run without rebasing. More than a week happened since and we can remove the legacy option now. Follow up after #17356


Jed Cunningham

commit sha 986381159ee3abf3442ff496cb553d2db004e6c4

Chart: fix running with uid 0 (#17688)


Kanthi

commit sha 9b2e593fd4c79366681162a1da43595584bd1abd

Fix sqlite hook - insert and replace functions (#17695)


Sam Wheating

commit sha 9922287a4f9f70b57635b04436ddc4cfca0e84d2

Replace execution_date with run_id in airflow tasks run command (#16666) Co-authored-by: Ash Berlin-Taylor <ash@apache.org>


Aakcht

commit sha 0016007b86c6dd4a6c6900fa71137ed065acfe88

hdfs provider: allow SSL webhdfs connections (#17637)


James Timmins

commit sha c6982040732a5e3a3348d83f552fb92c54839b04

Add steps for building release package without breeze. (#17702) * Add steps for building release package without breeze. * Update dev/README_RELEASE_AIRFLOW.md Co-authored-by: Kaxil Naik <kaxilnaik@gmail.com>


N

commit sha 09828a5512b86b93ee82eb8affeae4448f2460e4

docs(impersonation): update note so avoid misintrepretation (#17701)


push time in 25 days

push eventbaryluk/airflow

Witold Baryluk

commit sha c7da06de25aedfe714cc465a942a0beec92678f3

Test that KubernetesPodOperator log failures are not fatal


push time in 25 days

pull request commentapache/airflow

Do not fail KubernetesPodOperator tasks if log following fails

Back from vacation. Working on it now.

baryluk

comment created time in 25 days

issue openedapache/airflow

Some timezones in Web UI shows as number, not timezone name

Apache Airflow version

2.1.2

Operating System

Linux

Versions of Apache Airflow Providers

2.1.2

Deployment

Other 3rd-party Helm chart

Deployment details

n/a

What happened

Go to the Web UI.

Search, for example, for "Chile/Continental"; after clicking, it will be added to the list of timezones, but will be displayed as "-04 (-04:00)".

Screenshot from 2021-08-30 10-25-55

It happens for many other timezones.

What you expected to happen

"CLT (-04:00)" at this time of year, maybe?

How to reproduce

Clear

Anything else

n/a

Are you willing to submit PR?

  • [ ] Yes I am willing to submit a PR!

Code of Conduct

created time in a month

issue commentmikefarah/yq

base64 encode/decode pipeline option

I think yq should support this, just like jq. It is very common to have some data in base64 in YAML files, e.g. in Kubernetes Secrets, and to want to convert back and forth.

With a single value it is possible to do this by using an external command, no problem.

But how would you convert this jq script to yq?

kubectl get secret airflow-secrets -n airflow -o json | jq '.data | with_entries({key: (.key), value: (.value|@base64d)})' | yq -P e -

Not so trivial.
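For readers unfamiliar with what that jq pipeline does, here is a stdlib Python sketch of the same transformation (decoding every base64-encoded value in a Secret's `.data` map); `decode_secret_data` is a made-up helper name for illustration, not part of any library:

```python
import base64

def decode_secret_data(data):
    """Decode every base64-encoded value of a Kubernetes Secret's
    .data mapping, mirroring jq's with_entries(.value |= @base64d)."""
    return {key: base64.b64decode(value).decode("utf-8")
            for key, value in data.items()}
```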

avoidik

comment created time in a month

pull request commentapache/airflow

Do not fail KubernetesPodOperator tasks if log following fails

We also need test coverage for this change.

That makes sense. Let me take a look at mocking facilities available to test this.

baryluk

comment created time in a month

Pull request review commentapache/airflow

Do not fail KubernetesPodOperator tasks if log following fails

     def base_container_is_running(self, pod: V1Pod):
             return False
         return status.state.running is not None

-    @tenacity.retry(stop=tenacity.stop_after_attempt(3), wait=tenacity.wait_exponential(), reraise=True)
+    @tenacity.retry(stop=tenacity.stop_after_attempt(4), wait=tenacity.wait_exponential(), reraise=True)

Removed.
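For context, `tenacity.retry(stop=stop_after_attempt(n), ..., reraise=True)` retries the decorated call up to n times and, on final failure, re-raises the last exception unchanged. A stdlib-only stand-in sketch of those semantics (illustrative only, not tenacity's implementation; the exponential wait between attempts is omitted):

```python
import functools

def retry(stop_after_attempt=3, reraise=True):
    """Minimal stand-in for tenacity.retry: try the wrapped call up to
    stop_after_attempt times; if every attempt fails, re-raise the last
    exception unchanged (only the reraise=True behavior is modeled)."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            last_exc = None
            for _ in range(stop_after_attempt):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:
                    last_exc = exc
            raise last_exc  # all attempts exhausted
        return wrapper
    return decorator
```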

baryluk

comment created time in a month