
golemfactory/yagna 166

An open platform and marketplace for distributed computations

golemfactory/goth 9

Golem Test Harness, an integration testing framework for yagna - the (new) Golem Network client.

imapp-pl/ethel 4

Ethel (Œ) is a compiler for a simple programming language that generates code for the Ethereum Virtual Machine (EVM). Intended for use in testing EVM interpreters/compilers.

imapp-pl/cpp-ethereum 1

Gavin Wood's C++ implementation of the Ethereum yellowpaper.

azawlocki/azawlocki.github.io 0

Materials for the programming club at SP52

azawlocki/hex2d-rs 0

Helper library for working with 2d hex-grid maps

azawlocki/KindleUnpack 0

python based software to unpack Amazon / Kindlegen generated ebooks

azawlocki/nematus 0

Open-Source Neural Machine Translation in Theano

azawlocki/scratch-curriculum 0

Term 1 and 2 of Code Club, learning Scratch

azawlocki/Snap--Build-Your-Own-Blocks 0

a visual programming language inspired by Scratch

issue comment golemfactory/goth

Enable per-node API calls monitoring

@zakaprov Well, a single monitor for all API events is not as bad as it may seem.

It is true that the following test, which uses the helper API step function wait_for_approve_agreement_response() defined in golemfactory/yagna#1585, may fail depending on the order in which the two providers respond to approveAgreement:

async with runner():
    ...
    await wait_for_approve_agreement_response(provider_1)
    await wait_for_approve_agreement_response(provider_2)

However, that may merely mean that wait_for_approve_agreement_response() is not especially well suited to the job you want to do here.

The following version is not sensitive to ordering of the two events in question:

async def agreement_approved(provider, agr_id, events):
    async for e in events:
        if is_approve_agreement(e, event_type=ApiResponse, node_name=provider.name, agr_id=agr_id):
            return True
    raise AssertionError(f"Provider {provider.name} did not approve the agreement {agr_id}")

async with runner():
    ...
    a1 = runner.add_api_assertion(partial(agreement_approved, provider_1, agr_id_1))
    a2 = runner.add_api_assertion(partial(agreement_approved, provider_2, agr_id_2))
    # This will succeed independently of the order in which `a1` and `a2` succeed:
    await a1.wait_for_result(timeout=10)
    await a2.wait_for_result(timeout=10)

The difference now is that a2.wait_for_result() does not require that a2 succeeds after a1, as it does not use the EventMonitor.wait_for_event() mechanism.

The drawback here is that the assertions a1 and a2 will only succeed if they are started before the actual events they wait for (in contrast, EventMonitor.wait_for_event() can also examine past events), so you have to make sure a = runner.add_api_assertion(...) is not executed too late.

One solution to this problem would be to add API assertions even before the test is started, like this:

a = runner.add_api_assertion(some_assertion)
async with runner():
    ...
    await a.wait_for_result()

This has its own problems: assertion parameters such as probe names or agreement ids (as in the case of agreement_approved) may not be known before the test starts, so they would have to be passed via global variables or as futures of some kind; see the sketch below.
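For illustration, here is a minimal sketch of the futures variant (agreement_approved_via_future and the future plumbing are hypothetical; also, whether events emitted while the assertion is still awaiting the future get buffered depends on how the events iterator is implemented):

    import asyncio
    from functools import partial

    agr_id_future: asyncio.Future = asyncio.get_event_loop().create_future()

    async def agreement_approved_via_future(provider, agr_id_fut, events):
        # Block until the test learns the agreement id ...
        agr_id = await agr_id_fut
        # ... then watch the event stream as in agreement_approved() above.
        async for e in events:
            if is_approve_agreement(e, event_type=ApiResponse, node_name=provider.name, agr_id=agr_id):
                return True
        raise AssertionError(f"Provider {provider.name} did not approve the agreement {agr_id}")

    a = runner.add_api_assertion(partial(agreement_approved_via_future, provider_1, agr_id_future))
    async with runner():
        ...
        agr_id_future.set_result(agr_id_1)  # as soon as the id is known
        await a.wait_for_result()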

Another possibility is to create more declarative assertions. For example, instead of adding an assertion stating that the requestor will eventually accept an invoice with a particular id, you could write an assertion stating that all received invoices are eventually accepted (that may be too strong a property, I don't know, but you get the idea):

async def all_invoices_accepted(events):
    not_accepted = set()
    async for e in events:
        # `is_invoice_received`, `is_invoice_accepted` and `invoice_id` are
        # placeholders for the actual event predicates and accessors.
        if is_invoice_received(e):
            not_accepted.add(invoice_id(e))
        elif is_invoice_accepted(e):
            # discard() rather than remove(): an acceptance without a matching
            # receipt should not crash the assertion with a KeyError.
            not_accepted.discard(invoice_id(e))
    if not_accepted:
        raise AssertionError('some invoices were not accepted')

runner.add_api_assertion(all_invoices_accepted)
async with runner(...):
    ...
zakaprov

comment created time in 9 days

create branch golemfactory/yagna

branch : goth/api-events

created branch time in 19 days

started TheDavidDelta/lingva-translate

started time in 22 days

push event golemfactory/goth

azawlocki

commit sha 117a2c2052c7bf5dfb08b00c0773ed11030250eb

Adjust integration tests

view details

push time in 24 days

push event golemfactory/goth

azawlocki

commit sha 9f7e15bd3e50f10e25f6be10f78bca275054fd12

Adjust unit tests

view details

push time in 24 days

push event golemfactory/goth

azawlocki

commit sha b145b7a2aa8647713848a6d71f71b785a14f7990

Explicitly disconnect yagna containers before stopping Docker network (#540)

* Explicitly disconnect yagna containers before stopping Docker network
* Update black to 21.7b0
* Update link to an issue in a docstring for stop_network()

view details

azawlocki

commit sha 55752cecac114b6ff09c0a3b0b3c15342ec0d5a2

Modify APIEvent hierarchy, add more API event predicates

view details

azawlocki

commit sha 89f0822d2a7fc6d0dbc890bf6bed3427f0a02e68

Add args and kwargs params to EventMonitor.wait_for_event()

view details

azawlocki

commit sha cb1faa9e9c915314a87b1b9630b9f743605e43bd

Add Runner.wait_for_api_event() method

view details

azawlocki

commit sha cd206aaf0124e1d227b7615a39ddb157dea5aa55

Changes in logging configuration (separate loggers MITM proxy addons)

view details

azawlocki

commit sha 9aa8c7aa8c92ce2a77ac507e6ea60849064ad282

Add some colors to step.py

view details

push time in 24 days

PR opened golemfactory/goth

Make waiting for API events in tests more convenient
+609 -134

0 comments

12 changed files

pr created time in 24 days

create branch golemfactory/goth

branch : az/api-events

created branch time in 24 days

Pull request review comment golemfactory/yapapi

Decouple scripts from WorkContext

+"""Stuff."""++import abc+from functools import partial+import json+from os import PathLike+from pathlib import Path+from typing import Callable, Iterable, List, Optional, Dict, Union, Any, Awaitable, TYPE_CHECKING+++from yapapi.events import DownloadStarted, DownloadFinished+from yapapi.script.capture import CaptureContext+from yapapi.storage import StorageProvider, Source, Destination, DOWNLOAD_BYTES_LIMIT_DEFAULT+++if TYPE_CHECKING:+    from yapapi.ctx import WorkContext+++# For example: { "start": { "args": [] } }+BatchCommand = Dict[str, Dict[str, Union[str, List[str]]]]+++class Command(abc.ABC):+    def evaluate(self, ctx: "WorkContext") -> BatchCommand:+        """Evaluate and serialize this command."""++    async def after(self, ctx: "WorkContext") -> None:+        """A hook to be executed on requestor's end after the script has finished."""+        pass++    async def before(self, ctx: "WorkContext") -> None:+        """A hook to be executed on requestor's end before the script is sent to the provider."""+        pass++    @staticmethod+    def _make_batch_command(cmd_name: str, **kwargs) -> Awaitable[BatchCommand]:+        kwargs = dict((key[1:] if key[0] == "_" else key, value) for key, value in kwargs.items())+        return {cmd_name: kwargs}+++class Deploy(Command):+    """Command which deploys a given runtime on the provider."""++    def evaluate(self, ctx: "WorkContext"):+        return self._make_batch_command("deploy")+++class Start(Command):+    """Command which starts a given runtime on the provider."""++    def __init__(self, *args: str):+        self.args = args++    def __repr__(self):+        return f"start{self.args}"++    def evaluate(self, ctx: "WorkContext"):+        return self._make_batch_command("start", args=self.args)+++class Terminate(Command):+    """Command which terminates a given runtime on the provider."""++    def evaluate(self, ctx: "WorkContext"):+        return self._make_batch_command("terminate")+++class _SendContent(Command, abc.ABC):+    def __init__(self, dst_path: str):+        self._dst_path = dst_path+        self._src: Optional[Source] = None++    @abc.abstractmethod+    async def _do_upload(self, storage: StorageProvider) -> Source:+        pass++    def evaluate(self, ctx: "WorkContext"):+        return self._make_batch_command(+            "transfer", _from=self._src.download_url, _to=f"container:{self._dst_path}"+        )++    async def before(self, ctx: "WorkContext"):+        self._src = await self._do_upload(ctx._storage)++    async def after(self, ctx: "WorkContext") -> None:+        assert self._src is not None+        await ctx._storage.release_source(self._src)+++class SendBytes(_SendContent):+    """Command which schedules sending bytes data to a provider."""++    def __init__(self, data: bytes, dst_path: str):+        """Create a new SendBytes command.++        :param data: bytes to send+        :param dst_path: remote (provider) destination path+        """+        super().__init__(dst_path)+        self._data: Optional[bytes] = data++    async def _do_upload(self, storage: StorageProvider) -> Source:+        assert self._data is not None, "buffer unintialized"+        src = await storage.upload_bytes(self._data)+        self._data = None+        return src+++class SendJson(SendBytes):+    """Command which schedules sending JSON data to a provider."""++    def __init__(self, data: dict, dst_path: str):+        """Create a new SendJson command.++        :param data: dictionary representing JSON data to send+        :param 
dst_path: remote (provider) destination path+        """+        super().__init__(json.dumps(data).encode(encoding="utf-8"), dst_path)+++class SendFile(_SendContent):+    """Command which schedules sending a file to a provider."""++    def __init__(self, src_path: str, dst_path: str):+        """Create a new SendFile command.++        :param src_path: local (requestor) source path+        :param dst_path: remote (provider) destination path+        """+        super(SendFile, self).__init__(dst_path)+        self._src_path = Path(src_path)++    async def _do_upload(self, storage: StorageProvider) -> Source:+        return await storage.upload_file(self._src_path)+++class Run(Command):+    """Command which schedules running a shell command on a provider."""++    def __init__(+        self,+        cmd: str,+        *args: Iterable[str],

Hello, it's your typechecker speaking. I'd suggest *args: str here, see https://www.python.org/dev/peps/pep-0484/#arbitrary-argument-lists-and-default-argument-values
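For illustration (not part of the PR), PEP 484 specifies that the annotation on *args describes each individual positional argument:

    # With `*args: str`, each positional argument is a single str,
    # so inside the function `args` has type Tuple[str, ...]:
    def run(cmd: str, *args: str) -> None:
        ...

    run("/bin/sh", "-c", "echo hi")  # OK

    # `*args: Iterable[str]` would instead declare that every positional
    # argument is itself an iterable of strings, e.g.
    # run("/bin/sh", ["-c"], ["echo hi"]).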

zakaprov

comment created time in a month

PullRequestReviewEvent

Pull request review comment golemfactory/yapapi

Decouple scripts from WorkContext

+"""Stuff."""++import asyncio+from datetime import timedelta+from typing import Awaitable, Optional, List, Tuple, TYPE_CHECKING++from yapapi.events import CommandExecuted+from yapapi.script.command import *++if TYPE_CHECKING:+    from yapapi.ctx import WorkContext+++class Script:+    """Stuff."""++    timeout: Optional[timedelta] = None+    """Time after which this script's execution should be forcefully interrupted."""++    wait_for_results: bool = True+    """Stuff."""++    def __init__(self, context: "WorkContext"):+        self._ctx: "WorkContext" = context+        self._commands: List[Tuple[Command, asyncio.Future]] = []++    def _evaluate(self) -> List[BatchCommand]:+        """Evaluate and serialize this script to a list of batch commands."""+        batch: List[BatchCommand] = []+        for cmd, _future in self._commands:+            batch.append(cmd.evaluate(self._ctx))+        return batch++    async def _after(self):+        """Hook which is executed after the script has been run on the provider."""+        for cmd, _future in self._commands:+            await cmd.after(self._ctx)++    async def _before(self):+        """Hook which is executed before the script is evaluated and sent to the provider."""+        if not self._ctx._started and self._ctx._implicit_init:+            loop = asyncio.get_event_loop()+            self._commands.insert(0, (Deploy(), loop.create_future()))+            self._commands.insert(1, (Start(), loop.create_future()))++        for cmd, _future in self._commands:+            await cmd.before(self._ctx)++    def _set_cmd_result(self, result: CommandExecuted) -> None:+        cmd = self._commands[result.cmd_idx]+        cmd[1].set_result(result)+        if isinstance(cmd, Start):+            self._ctx._started = True++    def add(self, cmd: Command) -> Awaitable[CommandExecuted]:+        loop = asyncio.get_event_loop()+        future_result = loop.create_future()+        self._commands.append((cmd, future_result))+        return future_result++    def deploy(self) -> Awaitable[CommandExecuted]:+        """Schedule a Deploy command on the provider."""+        return self.add(Deploy())++    def start(self, *args: str) -> Awaitable[CommandExecuted]:+        """Schedule a Start command on the provider."""+        return self.add(Start(*args))++    def terminate(self) -> Awaitable[CommandExecuted]:+        """Schedule a Terminate command on the provider."""+        return self.add(Terminate())++    def send_json(self, data: dict, dst_path: str) -> Awaitable[CommandExecuted]:+        """Schedule sending JSON data to the provider.++        :param data: dictionary representing JSON data to send+        :param dst_path: remote (provider) destination path+        """+        return self.add(SendJson(data, dst_path))++    def send_bytes(self, data: bytes, dst_path: str) -> Awaitable[CommandExecuted]:+        """Schedule sending bytes data to the provider.++        :param data: bytes to send+        :param dst_path: remote (provider) destination path+        """+        return self.add(SendBytes(data, dst_path))++    def send_file(self, src_path: str, dst_path: str) -> Awaitable[CommandExecuted]:+        """Schedule sending a file to the provider.++        :param src_path: local (requestor) source path+        :param dst_path: remote (provider) destination path+        """+        return self.add(SendFile(src_path, dst_path))++    def run(+        self,+        cmd: str,+        *args: Iterable[str],

It's better to annotate *args with just str, see https://www.python.org/dev/peps/pep-0484/#arbitrary-argument-lists-and-default-argument-values

zakaprov

comment created time in a month

PullRequestReviewEvent

Pull request review comment golemfactory/yapapi

Decouple scripts from WorkContext

+"""Stuff."""++import asyncio+from datetime import timedelta+from typing import Awaitable, Optional, List, Tuple, TYPE_CHECKING++from yapapi.events import CommandExecuted+from yapapi.script.command import *++if TYPE_CHECKING:+    from yapapi.ctx import WorkContext+++class Script:+    """Stuff."""++    timeout: Optional[timedelta] = None+    """Time after which this script's execution should be forcefully interrupted."""++    wait_for_results: bool = True+    """Stuff."""++    def __init__(self, context: "WorkContext"):+        self._ctx: "WorkContext" = context+        self._commands: List[Tuple[Command, asyncio.Future]] = []++    def _evaluate(self) -> List[BatchCommand]:+        """Evaluate and serialize this script to a list of batch commands."""+        batch: List[BatchCommand] = []+        for cmd, _future in self._commands:+            batch.append(cmd.evaluate(self._ctx))+        return batch++    async def _after(self):+        """Hook which is executed after the script has been run on the provider."""+        for cmd, _future in self._commands:+            await cmd.after(self._ctx)++    async def _before(self):+        """Hook which is executed before the script is evaluated and sent to the provider."""+        if not self._ctx._started and self._ctx._implicit_init:+            loop = asyncio.get_event_loop()+            self._commands.insert(0, (Deploy(), loop.create_future()))+            self._commands.insert(1, (Start(), loop.create_future()))++        for cmd, _future in self._commands:+            await cmd.before(self._ctx)++    def _set_cmd_result(self, result: CommandExecuted) -> None:+        cmd = self._commands[result.cmd_idx]+        cmd[1].set_result(result)+        if isinstance(cmd, Start):+            self._ctx._started = True

cmd is a tuple, so it cannot be an instance of Start.
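One possible fix (a sketch only, unpacking the tuple first):

    def _set_cmd_result(self, result: CommandExecuted) -> None:
        cmd, future = self._commands[result.cmd_idx]
        future.set_result(result)
        if isinstance(cmd, Start):
            self._ctx._started = True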

zakaprov

comment created time in a month

PullRequestReviewEvent

Pull request review comment golemfactory/yapapi

Johny b/561 non context manager

(review context, abridged to the changed lines in _Engine)

    @property
    def operative(self) -> bool:
        return self._operative

    async def __aenter__(self) -> "_Engine":
        try:
            await self._start()
            return self
        except:
            await self._stop(*sys.exc_info())
            raise

    async def __aexit__(self, *exc_info) -> Optional[bool]:
        return await self._stop(*exc_info)

    async def _stop(self, *exc_info) -> Optional[bool]:
        self._operative = False
        return await self._stack.__aexit__(*exc_info)

    async def _start(self) -> None:
        self._operative = True

Perhaps stop() should differ in that self._stopped = True goes before await self._stop(). And there's the complication with the exc_info tuple being passed around from __aexit__().
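Concretely, something like this (a sketch only, assuming the lock-based start()/stop() from the comment below):

    async def stop(self, *exc_info) -> Optional[bool]:
        async with self._lock:
            if not self._stopped:
                # Flip the flag before awaiting, so `operative` turns False
                # as soon as shutdown begins, not only after it completes.
                self._stopped = True
                return await self._stop(*exc_info)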

johny-b

comment created time in a month

PullRequestReviewEvent

Pull request review comment golemfactory/yapapi

Johny b/561 non context manager

(same review context as above, abridged)

Additionally, if you put start() and stop() in _Engine instead of Golem, you could define _Engine's __aenter__() and __aexit__() in terms of start() and stop(). That would also protect us from concurrent executions of async with golem: ... in two different tasks.
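A minimal sketch of that delegation (assuming start() and stop() with the signatures discussed here):

    async def __aenter__(self) -> "_Engine":
        await self.start()
        return self

    async def __aexit__(self, *exc_info) -> Optional[bool]:
        return await self.stop(*exc_info)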

johny-b

comment created time in a month

PullRequestReviewEvent

Pull request review comment golemfactory/yapapi

Johny b/561 non context manager

(same review context as above, abridged)

Thanks. Now it probably works well, but the solution that uses asyncio.Futures seems a bit complicated, at least to me. Why not something more straightforward, with asyncio.Lock:


def __init__(self):
    self._started = False
    self._stopped = False
    self._lock = asyncio.Lock()

async def start(self):
    async with self._lock:
        if not self._started:
            await self._start()
            self._started = True

# Similarly for stop()

@property
def operative(self):
    return self._started and not self._stopped

johny-b

comment created time in a month

PullRequestReviewEvent

delete branch golemfactory/goth

delete branch : az/disconnect-yagna-containers

delete time in a month

issue closed golemfactory/goth

Disconnect containers before stopping Docker compose network

The method ComposeNetworkManager.stop_network() is used to stop the compose network at test shutdown and startup (to make sure a new network is created from scratch): https://github.com/golemfactory/goth/blob/5a68c06f07fe87dd72058e1d91fed0d2c68fba9f/goth/runner/container/compose.py#L148-L156

The docker-compose down command that this method executes sometimes fails because, for an unknown reason, some yagna containers (already removed!) are still reported as attached to the network. This failure causes the whole test case to fail.

In the yapapi and yagna CI pipelines we work around this issue by adding cleanup steps that disconnect the containers from the network using docker network commands (see https://github.com/golemfactory/yapapi/pull/320) or even restart the Docker daemon (see https://github.com/golemfactory/yapapi/pull/553). But when using goth locally, having to perform this step manually is an inconvenience. Perhaps we could perform the cleanup step using the Docker SDK in ComposeNetworkManager.stop_network(); see the sketch below.
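A minimal sketch of such a cleanup step using the Python Docker SDK (the function name disconnect_all_containers is hypothetical; the network name is assumed to be docker_default):

    import docker

    def disconnect_all_containers(network_name: str = "docker_default") -> None:
        client = docker.from_env()
        network = client.networks.get(network_name)
        # Force-disconnect every container still attached to the network,
        # mirroring `docker network disconnect -f <network> <container>`.
        for container in network.containers:
            network.disconnect(container, force=True)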

closed time in a month

azawlocki

push event golemfactory/goth

azawlocki

commit sha b145b7a2aa8647713848a6d71f71b785a14f7990

Explicitly disconnect yagna containers before stopping Docker network (#540)

* Explicitly disconnect yagna containers before stopping Docker network
* Update black to 21.7b0
* Update link to an issue in a docstring for stop_network()

view details

push time in a month

PR merged golemfactory/goth

Explicitly disconnect yagna containers before stopping Docker network

Resolves #539

This PR adds a step that explicitly disconnects containers from the docker_default network using the Python Docker SDK. It is equivalent to executing the command

docker network disconnect -f docker_default container

for each container connected to docker_default.

This new step is performed as part of ComposeNetworkManager.stop_network(), before running docker-compose down.

+70 -21

1 comment

3 changed files

azawlocki

pr closed time in a month

pull request commentgolemfactory/goth

Explicitly disconnect yagna containers before stopping Docker network

> Nice! Does this mean we can remove the cleanup steps from integration.yml workflow in goth repo?

That's possible

azawlocki

comment created time in a month

push event golemfactory/goth

azawlocki

commit sha d8e005505f044944a1d5d2d4b62543b837342124

Update link to an issue in a docstring for stop_network()

view details

push time in a month

push event golemfactory/goth

azawlocki

commit sha 718658ae030f18e16fbdf06007d6271e82743219

Update black to 21.7b0

view details

push time in a month

create branch golemfactory/goth

branch : az/disconnect-yagna-containers

created branch time in a month