
ronf/asyncssh 928

AsyncSSH is a Python package which provides an asynchronous client and server implementation of the SSHv2 protocol on top of the Python asyncio framework. It requires Python 3.4 or later and the Python cryptography library for some cryptographic functions.

minddrive/random_math 0

A collection of random modules and programs for math puzzles and problems

ronf/asyncio 0

This project is the asyncio module for Python 3.3. Since Python 3.4, asyncio is part of the standard library.

ronf/paramiko 0

Debian packaging for Paramiko Python module

ronf/ssh-comparison 0

Comparison of various SSH implementations in terms of supported crypto protocols

issue comment pyca/cryptography

EllipticCurvePublicKey can't be serialized to raw format

I can confirm in AsyncSSH that I only use the "Raw" encoding for ED keys. For EC keys, all I need is the X962 format.
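For reference, those two encodings map onto the cryptography library's public_bytes() API as follows (a sketch; the freshly generated keys are just for illustration):

    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import ec, ed25519

    # Ed25519: the raw 32-byte public key bytes
    ed_pub = ed25519.Ed25519PrivateKey.generate().public_key()
    raw = ed_pub.public_bytes(serialization.Encoding.Raw,
                              serialization.PublicFormat.Raw)

    # EC: the X9.62 uncompressed point encoding
    ec_pub = ec.generate_private_key(ec.SECP256R1(),
                                     default_backend()).public_key()
    point = ec_pub.public_bytes(serialization.Encoding.X962,
                                serialization.PublicFormat.UncompressedPoint)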

DurandA

comment created time in 3 days

issue comment ronf/asyncssh

ConnectionResetError in sftp client

Yes, it would - it's all about how long the non-killed transfers would take to finish.

I think you're right that we can consider this resolved. I really appreciate all your help putting together the test case for it. Without that, the fix would have been very difficult to find!

I generally leave issues open until the fix actually makes it into a release, as a reminder to post the version the fix goes into, but I don't think there's anything further needed for now. Thanks again!

termim

comment created time in 6 days

issue comment ronf/asyncssh

ConnectionResetError in sftp client

The times were very close together in my tests:

2020-03-28 17:29:05,502: Tasks Number1: 1
2020-03-28 17:29:05,948: Tasks Number2: 1
2020-03-28 17:29:07,085: Tasks Number3: 1153
2020-03-28 17:29:22,131: Tasks Number4: 2
2020-03-28 17:29:22,131: CANCEL: Task-11
2020-03-28 17:29:22,131: CANCEL: Task-142
2020-03-28 17:29:22,131: CANCEL: Task-396
2020-03-28 17:29:22,131: CANCEL: Task-707
2020-03-28 17:29:22,131: CANCEL: Task-886
2020-03-28 17:29:22,131: CANCEL: Task-1350
2020-03-28 17:29:22,131: CANCEL: Task-1737
2020-03-28 17:29:22,131: CANCEL: Task-1929
2020-03-28 17:29:22,131: CANCEL: Task-2248
2020-03-28 17:29:22,131: CANCEL: Task-2787
2020-03-28 17:29:22,131: Tasks Number5: 1
2020-03-28 17:29:22,132: Tasks Number6: 1

It took a couple of seconds to finish the initial SSH handshake, but after that there was the expected 15 second delay between Number3 and Number4 and almost no time at all after that for the cancellation or the awaiting of the other tasks to finish. That was when I killed ALL of the sftp server processes, though. When I left some of those sftp-server processes alive, it took a bit longer between Number5 and Number6:

2020-03-29 20:26:18,796: Tasks Number5: 991
2020-03-29 20:26:19,232: Tasks Number6: 9

I imagine this amount of time would depend on how long the remaining file transfers took to finish. So, if you were downloading over a slow link, that could explain the long delay.

termim

comment created time in 7 days

issue comment ronf/asyncssh

SCPError vs SFTPError

Ok - this should be corrected in commit abc87f7 in the "develop" branch.

While it was never intended to be public, I never really liked the way SCPError showed up in debugging output, as that could be confusing. So, as part of this change, I decided to eliminate SCPError entirely and directly create one of the public SFTPError subclasses when raising SCP errors. This also had the benefit of avoiding multiple inheritance. The SCP code already had to cope with exceptions not always having the extra "fatal" and "suppress_send" members, since they weren't always of type SCPError. Now, I've just eliminated that type and added these members directly to SFTPError instances in cases where they're needed.

termim

comment created time in 7 days

push event ronf/asyncssh

Ron Frederick

commit sha abc87f7c619b52519e2986452d949241f114a86d

Fix SCP to use new SFTPError subclasses

This commit enhances SCP to use the new subclasses of SFTPError introduced in a prior commit. Previously, SCP errors raised in the SFTP classes would have returned these subclasses, but some of the errors raised within the SCP implementation itself were still returning the parent SFTPError class. This has now been corrected.

As part of this change, the internal "SCPError" subclass of SFTPError has been eliminated completely. This class was not part of the public AsyncSSH API and applications should not have been using it, so this is not a compatibility break. Application code should only have been using SFTPError (or its new subclasses). This should reduce confusion in debugging output, where the internal SCPError class was sometimes visible. Now, exceptions will always show up as one of the public SFTPError subclasses.

view details

push time in 7 days

issue comment ronf/asyncssh

SCPError vs SFTPError

No - this is intentional. The SCPError class is intended to be internal to the scp module. It adds some additional members that are needed to properly return errors in the form needed by the SCP wire protocol. However, SCPError is a subclass of SFTPError, and that's what application code is expected to catch. In the case of the server-side implementation, SCP and SFTP actually both rely on the same SFTPServer object, which should only ever raise SFTPError, not SCPError.

That said, I failed to update the SCP implementation to properly take into account the recent changes I made which introduced subclasses of SFTPError that applications can catch. I'll remedy that before I put out the next release.

termim

comment created time in 7 days

issue comment ronf/asyncssh

ConnectionResetError in sftp client

For what it's worth, I tried out your new test code and so far I haven't been able to trigger unretrieved exceptions there, on either Windows or Mac clients.

termim

comment created time in 8 days

issue comment ronf/asyncssh

ConnectionResetError in sftp client

It's good to hear that you aren't seeing the PermissionDenied error after a combination of my changes and retrieving all the exceptions. You may be right that my changes alone (without changing the test code to retrieve all the exceptions) would have been enough to prevent the Permission Denied error. I was initially thinking that even though you performed the "cancel" operation in one of the tasks, that cancellation might have still been in progress on some of the tasks at the point where the first exception was reported, but I guess that would have only been an issue if you used something like the as_completed() iterator or wait() with FIRST_COMPLETED or FIRST_EXCEPTION. With ALL_COMPLETED, I guess it should block until the cancellations have all happened (or the tasks completed for other reasons) regardless of whether you retrieved the other exceptions, giving the "finally" block which calls cleanup() a chance to run. However, not retrieving the other exceptions means getting all those tracebacks, making it hard to see the other log output. So, it's worthwhile to do that regardless.

As for the large number of tasks, that actually makes sense -- each parallel I/O operation (which defaults to a 16 KB block size) is scheduled as a separate task. So, you'd have nearly 1,000 tasks per file transfer if your file was 15 MB in size, or close to 10,000 tasks total if the transfers fully completed. If the cancellation happened prior to the full file being transferred, the numbers would be lower, but still significant. You can increase the block size, but some SFTP servers have fairly low undocumented limits on the amount a single read() call can return, so it's better to keep the block size pretty low. When I last looked, OpenSSH limited reads to no more than 64 KB, and actually failed if the total SFTP message size exceeded 256 KB. See the note I added in https://asyncssh.readthedocs.io/en/latest/api.html#asyncssh.SFTPClient.open.
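For illustration, raising it a bit on a single transfer might look like this (a sketch, assuming an already-open SFTPClient object named sftp; 32 KB stays under the OpenSSH read limit mentioned above):

    # Raise the parallel I/O block size from the 16 KB default to 32 KB
    await sftp.get('remote.bin', 'local.bin', block_size=32768)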

As for the deeply nested stack, I haven't figured that one out yet, nor can I figure out why there could still be unretrieved exceptions reported with the new test code (and with a similar change I made here yesterday to the test code). I could see it happening if there was a new exception raised in main() inside one of the except blocks, but in my testing yesterday that wasn't what I saw. In fact, I added extra exception blocks in main() to specifically look for that, and I still saw tracebacks showing unretrieved exceptions.

When you were looking at the time it takes to cancel, did you take into account that the cancel_it() task itself was one of the things being waited for, and that it wouldn't even start for 15 seconds due to the sleep at the top?

termim

comment created time in 8 days

issue comment ronf/asyncssh

ConnectionResetError in sftp client

Something else I noticed about your test code is that it tries to clean up all of the file objects as soon as it hits the first exception. This could be a problem even after the fix I put in here, as some of those other transfers might still be in the process of being canceled. You probably should wait for the result to come back from all the tasks before you try to clean up the files they might have open, or only clean up the files associated with each task individually after retrieving that task's result.
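A sketch of the first approach, with hypothetical tasks and cleanup() names standing in for the ones in your test code:

    # Wait for *every* task to finish or be cancelled, retrieving all
    # results (including exceptions) so none go unretrieved
    results = await asyncio.gather(*tasks, return_exceptions=True)

    # Only now is it safe to clean up files the tasks may have had open
    cleanup()

    for result in results:
        if isinstance(result, Exception):
            print('Transfer failed:', result)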

termim

comment created time in 9 days

issue comment ronf/asyncssh

ConnectionResetError in sftp client

I've checked in a potential fix for this in commit b6e68c9 in the "develop" branch. I also did some other cleanup around handling of SFTP exceptions in a previous commit. Could you try this version and see if it fixes the issue you're seeing?

termim

comment created time in 9 days

push event ronf/asyncssh

Ron Frederick

commit sha 27af34828086005f54ff4930f35881ee40cd342f

Clean up SFTP exception handling

This commit cleans up SFTP exception handling to avoid reporting nested exceptions in a few places. It also takes advantage of the new SFTPEOFError exception in one place that was missed in the previous round of changes for that.

view details

Ron Frederick

commit sha b6e68c9cb789ef298ed7157831045779dd34b35f

Fix a potential issue in SFTP file copy cleanup

This commit makes sure that the SFTP file copier class properly attempts to close both the source and destination file objects, even if it gets an error during one of those closes.

view details

push time in 9 days

issue comment ronf/asyncssh

ConnectionResetError in sftp client

Ok - I think I may have a lead here. I was finally able to reproduce the problem, and it appears that when it happens the cleanup function attempts to close the network file object (the source), but never attempts to close the local file object (the destination), leaving that file open when your code later tries to unlink the file.

If an exception occurred during the network close operation, I could see this happening, as it would exit the cleanup() function before completing things. I haven't actually been able to confirm that an exception is being thrown in that case, as I can't reproduce it reliably enough to see that yet, but I think this is a possible explanation. If that's the case, using nested "finally" blocks might be one way to address this.
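A minimal sketch of that nested-"finally" idea; the _src and _dst names here are illustrative, not necessarily the actual members of the copier class:

    async def cleanup(self):
        try:
            if self._src:
                await self._src.close()
        finally:
            # Runs even if closing the source raised, so the local
            # destination file is never left open
            if self._dst:
                await self._dst.close()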

termim

comment created time in 9 days

issue comment ronf/asyncssh

ConnectionResetError in sftp client

Looking at the test code more closely here, I can see why it sometimes reports some task exceptions were not retrieved -- it basically stops on the first exception, and never finishes calling f.result() on the rest. I don't fully understand why the stack is so deep in that case, but fiddling with the loop allowed me to avoid all that output and better see what's going on. I still haven't been able to reproduce the Permission Denied error, though.

termim

comment created time in 9 days

issue comment ronf/asyncssh

ConnectionResetError in sftp client

Thanks for providing the test code!

Unfortunately, I haven't been able to reproduce the problem here so far. I tried a 16 MB file here and I didn't see the cancellation at all, despite raising the sleep time to 20 seconds to give myself a chance to kill some of the server processes. I have a feeling the transfers all finished before the cancellations began, but the Python process was still waiting for the sleep to finish in cancel_it().

With a larger file, I did manage to get it to report back the 'Connection closed' (or sometimes 'Connection not open'), but I wasn't able to reproduce the permission denied error, even when running on Windows 10. Also, the connection closed error showed up very oddly. It was an unretrieved exception, and it had a HUGE traceback that seemed to involve a bunch of nested calls to _make_request, which I wouldn't expect at all.

termim

comment created time in 9 days

issue comment ronf/asyncssh

ConnectionResetError in sftp client

There are various network and file-related errors that could have happened in some cases, and depending on whether those exceptions were converted to an SFTPError exception or not, those could have triggered the "object has no attribute 'code'" error you reported. However, all of those should be gone now with this fix.

Regarding the issue where deleting a file could potentially report back that the file is still open, there are multiple parallel tasks doing I/O on the file, and while _SFTPParallelIO does call cancel() on all of the outstanding tasks before actually raising an exception to the calling code, the current code doesn't actually reap all the canceled tasks before doing so. It counts on the _reap_task() method in the SSHConnection to do this.

That said, there is a "finally" block in _SFTPParallelIO.run() that calls cleanup() automatically when any parallel I/O finishes, and _SFTPFileCopier.cleanup() makes sure that it closes the src & dst files in cleanup() even when the exception is raised. So, even though there might be some tasks left to reap, the file objects should be properly closed in the copy case before the exception is propagated to the caller.

Are you seeing this problem on file copies, or on large parallel reads or writes?

termim

comment created time in 10 days

issue comment ronf/asyncssh

I'm trying to determine the reason for Code 10, Connection lost issues

If the source of the error is related to MaxStartups, you'd actually be better off using an asyncio.Semaphore() to limit the number of simultaneous calls to asyncssh.connect() rather than overwhelming the server and having it react by failing some of the connections. It's actually quite simple to do this. You'd just create a Semaphore() with a limit of the max number of simultaneous connects you want to allow and grab the semaphore while you are calling asyncssh.connect(). For instance:

import asyncio

import asyncssh

class MySSHClient:
    def __init__(self, max_connects):
        self._semaphore = asyncio.Semaphore(max_connects)

    async def connect(self, host):
        # Hold the semaphore only while the connection (including the
        # SSH handshake) is being set up
        async with self._semaphore:
            return await asyncssh.connect(host)

    async def run_command(self, host, command):
        try:
            conn = await self.connect(host)

            async with conn:
                return await conn.run(command)
        except Exception as exc:
            # Return the exception rather than raising it, so one failed
            # task doesn't abort the as_completed() loop below
            return exc

async def run():
    client = MySSHClient(5)
    command = 'echo "hello world"'

    tasks = [client.run_command(host, command) for host in 100*['localhost']]

    for task in asyncio.as_completed(tasks):
        result = await task

        if isinstance(result, Exception):
            print(f'Task failed: {result}')
            print(result.__dict__)
        elif result.exit_status != 0:
            print(f'Task exited with status {result.exit_status}')
            print(result.stderr, end='')
        else:
            print('Task succeeded:')
            print(result.stdout, end='')

asyncio.run(run())

Basically, the only change is to initialize a _semaphore member in MySSHClient and then to do an "async with" on that in a wrapper around asyncssh.connect().

Note that this allows more than 5 SSH connections to be open at once, but it limits things such that no more than 5 connections can be performing the SSH handshake at once, avoiding hitting the server's MaxStartups limit.

This should be much friendlier to the server than opening a large number of connections all at once and having the server fail some of them, which then causes the client to open even more connections when it retries. Here, it should pace the connections to make sure that none of them fail in the first place.

oferchen

comment created time in 11 days

issue comment ronf/asyncssh

I'm trying to determine the reason for Code 10, Connection lost issues

The limit is on the number of simultaneous sessions which haven't finished authenticating yet. So, retrying would be enough to fix things in some cases, if other connections you had open managed to finish authenticating before you retried.

oferchen

comment created time in 11 days

issue comment ronf/asyncssh

Connection pool? Use connection multiple times out of context creation

If you are calling exit() yourself, all you'd need to do is remove the corresponding SFTPClient from the sftp_clients dictionary. However, if the connection is closed by the SFTP server, it's a bit trickier.

If you try to use an existing SFTPClient object previously opened on that connection, it will most likely return an error of either FX_NO_CONNECTION if it has already detected the connection close or FX_CONNECTION_LOST if the close happens while processing a new request. One option would be to retry the request in these cases. For instance:

class SFTP:
    sftp_clients = {}

    @classmethod
    async def connect(cls, credentials):
        host = credentials['host']
        sftp_client = cls.sftp_clients.get(host)

        if not sftp_client:
            conn = await asyncssh.connect(**credentials)
            sftp_client = await conn.start_sftp_client()
            cls.sftp_clients[host] = sftp_client

        return sftp_client

    @classmethod
    async def get_file(cls, remote_path, credentials):
        attempts = 0 

        while attempts < 3:
            sftp_client = await cls.connect(credentials)

            try:
                async with sftp_client.open(remote_path, encoding=None) as f:
                    return await f.read()
            except (asyncssh.SFTPNoConnection, asyncssh.SFTPConnectionLost):
                cls.sftp_clients.pop(credentials['host'], None)
                attempts += 1
            except asyncssh.SFTPError:
                return None

The connect() method is unchanged from above, but there's now some new logic to look for the SFTPNoConnection and SFTPConnectionLost exceptions. These exceptions are only available in the latest "develop" commit from earlier today, but you could get a similar result inside the "except asyncssh.SFTPError" block if you test exc.code for FX_NO_CONNECTION or FX_CONNECTION_LOST and then do a similar thing to remove the sftp_client from sftp_clients and retry.
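On releases that predate those subclasses, that fallback would look something like this inside get_file() (a sketch using the existing FX_* error code constants):

            except asyncssh.SFTPError as exc:
                if exc.code in (asyncssh.FX_NO_CONNECTION,
                                asyncssh.FX_CONNECTION_LOST):
                    cls.sftp_clients.pop(credentials['host'], None)
                    attempts += 1
                else:
                    return None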

shsimeonova

comment created time in 12 days

issue comment ronf/asyncssh

ConnectionResetError in sftp client

Ok - SFTPError subclass support is now available in the "develop" branch, and I have checked in an equivalent to the above fix on top of that in commit f3d7c0f. Thanks again!

termim

comment created time in 12 days

push event ronf/asyncssh

Ron Frederick

commit sha 76ba80367130fac4ea1f126d8a28a733fa8f79c7

Correct doc comment

view details

Ron Frederick

commit sha 419865e864ba740ee482eb218597b3a254dff701

Create subclasses for SFTPError exceptions

This commit creates a subclass for each of the possible SFTPError exception codes, allowing applications to easily catch select SFTP errors rather than having to catch all SFTP errors and then test the error code in the "except" block.

view details

Ron Frederick

commit sha f3d7c0f3cfc401ed2baaf9c78fb08769b8e677ba

Fix exception handling in SFTP parallel I/O class

This commit fixes a problem in the exception handling in the SFTP parallel I/O class where a low-level connection failure was not being handled correctly, looking for a "code" member of the exception object which is only present for higher-level SFTP errors. Thanks go to Mikhail Terekhov for reporting this!

view details

push time in 12 days

issue comment ronf/asyncssh

ConnectionResetError in sftp client

Thanks for the report! That function tests exc.code without first checking if the exception is of type SFTPError. So, if other lower-level failures occur, it could trigger the error you saw.

The following should be a quick fix:

diff --git a/asyncssh/sftp.py b/asyncssh/sftp.py
index 36de49c..9fc5d92 100644
--- a/asyncssh/sftp.py
+++ b/asyncssh/sftp.py
@@ -433,7 +433,8 @@ class _SFTPParallelIO:
                 for task in done:
                     exc = task.exception()
 
-                    if exc and exc.code != FX_EOF:
+                    if exc and not (isinstance(exc, SFTPError) and
+                                    exc.code == FX_EOF):
                         exceptions.append(exc)
 
                 if exceptions:

However, this reminds me that I had been meaning to create subclasses of SFTPError in a manner similar to what I did some time ago for subclasses of DisconnectError, to make it easier to actually provide more specialized exception handling depending on the specific exception code. This seems like a good excuse to do that, as it'll make the above fix a bit cleaner. I'll get that version of the change checked into the "develop" branch soon.
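Once those subclasses exist, application code can catch just the case it cares about instead of testing exc.code. A sketch, using the SFTPEOFError name from the commits above and assuming an open SFTP file object named f:

    try:
        data = await f.read()
    except asyncssh.SFTPEOFError:
        data = b''    # end of file is expected here, not an error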

termim

comment created time in 12 days

issue comment ronf/asyncssh

asyncssh.sftp.SFTPError: Failure on open() with pflags_or_mode=FXF_CREAT

Yes - open() is what you want here.

You can read & write using bytes objects, but only if you specify encoding=None as an argument in the call to open(). Otherwise, it defaults to UTF-8 and expects you to be passing in string data, similar to write() calls elsewhere in AsyncSSH.

If you use a mode string instead of the SFTP flags, you can add 'b' to the mode to get the same result, just as you would for a local file open() call. For instance, opening with a mode of 'wb' will open the file for writing in binary mode, creating it if necessary. Take a look at https://asyncssh.readthedocs.io/en/latest/api.html#asyncssh.SFTPClient.open for more details of the available open modes and their corresponding flags.

As for the first error you got, I think it is because you can't specify FXF_CREAT by itself. It should be combined with FXF_WRITE. Again, see the above link for the combinations you'll probably want to use if you decide to directly specify pflags.
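Putting that together, the two equivalent ways to open a file for binary write look something like this (a sketch, assuming an SFTPClient object named sftp):

    # Mode string form: 'w' implies create/truncate, 'b' selects binary
    async with sftp.open('example.bin', 'wb') as f:
        await f.write(b'some binary data')

    # Explicit pflags form: FXF_CREAT combined with FXF_WRITE (and
    # FXF_TRUNC for the same truncating behavior as 'w')
    async with sftp.open('example.bin',
                         asyncssh.FXF_WRITE | asyncssh.FXF_CREAT |
                         asyncssh.FXF_TRUNC, encoding=None) as f:
        await f.write(b'some binary data')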

shsimeonova

comment created time in 14 days

issue comment ronf/asyncssh

I'm trying to determine the reason for Code 10, Connection lost issues

Sorry for the slow response. The e-mail notification about this was incorrectly flagged as spam. Are you still running into this issue?

There are limits on most SSH servers on how many connections you can open at once. With OpenSSH, I think the default is no more than 10 connections which can be going through SSH authentication in parallel. So, if you want more connections than that, you have to space them out to allow the authentication to complete on existing connections before opening additional ones. You can raise this limit if you have access to the server's config -- I think the config setting is named "MaxStartups". There's also a limit on the number of allowed sessions on a single connection, controlled by "MaxSessions", but you seem to be opening a new connection each time here, so the latter setting shouldn't be an issue.
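For reference, the relevant settings in the server's sshd_config look something like this (values here are illustrative, not recommendations):

    MaxStartups 10:30:100   # begin randomly dropping unauthenticated
                            # connections at 10, refuse them all at 100
    MaxSessions 10          # max sessions multiplexed over one connection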

You may want to enable debugging to see exactly how far it gets before it fails, but if you're seeing a mix of successes and failures when opening many connections in parallel and it is reporting back "Connection lost", I think the "MaxStartups" is most likely your issue.

To enable debug logging, you'd do something like:

    import logging
    logging.basicConfig(level='DEBUG')
    asyncssh.set_debug_level(2)
oferchen

comment created time in 15 days

issue comment ronf/asyncssh

Connection pool? Use connection multiple times out of context creation

You're close here. The problem is mainly in your use of "async with" in the connect() function. That closes the connection as soon as it exits the block, so you can't return "conn" there and expect to be able to use it in the calling function. You need to just do a plain assignment on the call to asyncssh.connect().

I rewrote your connect() function to build in the logic to look for already-open connections, and added some "async with" calls to clean up the SFTP client and SFTP file objects. I also adjusted your check for IOError to look for asyncssh.SFTPError instead, and changed the error path to return None instead of False (purely a personal preference for how to handle "optional" return values). The result looks as follows:

class SFTP:
    open_connections = {}

    @classmethod
    async def connect(cls, credentials):
        host = credentials['host']
        conn = cls.open_connections.get(host)

        if not conn:
            conn = await asyncssh.connect(**credentials)
            cls.open_connections[host] = conn

        return conn

    @classmethod
    async def get_file(cls, remote_path, credentials):
        conn = await cls.connect(credentials)

        async with conn.start_sftp_client() as sftp_client:
            try:
                async with sftp_client.open(remote_path, encoding=None) as f:
                    return await f.read()
            except asyncssh.SFTPError:
                return None

If you wanted to, you could actually reuse the SFTP client objects rather than just the connection objects, if you're not trying to do multiple file transfers in parallel. The approach would be similar, but you'd move the start_sftp_client() call into the connect() function and have it save those objects in the dictionary instead of the connection objects. This would look something like:

class SFTP:
    sftp_clients = {}

    @classmethod
    async def connect(cls, credentials):
        host = credentials['host']
        sftp_client = cls.sftp_clients.get(host)

        if not sftp_client:
            conn = await asyncssh.connect(**credentials)
            sftp_client = await conn.start_sftp_client()
            cls.sftp_clients[host] = sftp_client

        return sftp_client

    @classmethod
    async def get_file(cls, remote_path, credentials):
        sftp_client = await cls.connect(credentials)

        try:
            async with sftp_client.open(remote_path, encoding=None) as f:
                return await f.read()
        except asyncssh.SFTPError:
            return None
shsimeonova

comment created time in 15 days

pull request comment ronf/asyncssh

handle `SSHServer.server_requested` Bool returns

Thanks very much for contributing this, and confirming the revised change. This will be included in the next release!

tommyvn

comment created time in 21 days

issue comment ronf/asyncssh

SSH command send to Cisco ASA hangs on command execution

I'm glad to hear that wait_for() worked. I agree that readuntil() is probably a better approach, though, as it doesn't require trying to guess a timeout value that's long enough for all the output to be collected, and readuntil() will return more quickly. The only caveat is that you need to make sure the commands you are running will never have the prompt string you are looking for appearing in their output.
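A sketch of the readuntil() approach (the host and prompt string here are hypothetical; substitute whatever prompt your ASA actually prints):

    async with asyncssh.connect('10.0.0.1') as conn:
        async with conn.create_process() as proc:   # no command: shell session
            proc.stdin.write('show inventory\n')
            output = await proc.stdout.readuntil('ciscoasa# ')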

settlej

comment created time in 22 days

pull request comment ronf/asyncssh

handle `SSHServer.server_requested` Bool returns

Thanks very much for your submission, Tom!

I've gone ahead and checked your proposed changes with a few minor adjustments into the "develop" branch as commit 17fd581. I moved the position of the try..except block that was looking for OSError, edited the documentation slightly, and fixed some issues with parentheses and line length, but this should preserve the functionality you added to properly handle True and False being returned from an async version of the server_requested() method.

Also, thanks very much for adding unit tests for the async version of server_requested() as part of your change!

If you get a chance, could you give the "develop" branch a try to confirm everything is working as you expect?

tommyvn

comment created time in 22 days

push event ronf/asyncssh

Tom van Neerijnen

commit sha 17fd5814a057a0326fa6f7c1a1ce8f7cc447674b

Handle `SSHServer.server_requested` coroutine returning booleans

`SSHServer.server_requested` is documented to allow a coroutine return, but prior to this commit a return of False from the coroutine bypassed the code that handles the synchronous version. Further, adding `True` to the async flow proved trivial and so that is done too.

view details

push time in 22 days

issue comment ronf/asyncssh

SSH command send to Cisco ASA hangs on command execution

After thinking about this a bit, I decided this seems like a common enough use case that it might be useful to build the timeout capability into the SSHClientProcess.wait() and SSHClientConnection.run() methods. I created a new asyncssh.TimeoutError class which is a subclass of both asyncssh.ProcessError and asyncio.TimeoutError, and added an optional timeout argument you can use when waiting for processes to finish. This is now available in the "develop" branch as commit 5179ffa. With this change, the above example can be simplified down to something like:

    async with asyncssh.connect('localhost') as conn:
        try:
            result = await conn.run('show inventory', timeout=1)
        except asyncssh.TimeoutError as exc:
            result = exc

SSHCompletedProcess and ProcessError are different classes, but they provide access to the same set of member variables, so the TimeoutError exception (a subclass of ProcessError) can actually be used in place of the SSHCompletedProcess result when the timeout fires. However, in a real-world example, you'd probably want to do slightly different things in the "except" block than you would when the call succeeds, and so you'd probably end up with something like a try..except..else block.
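In that style, the example might end up looking something like this (a sketch):

    async with asyncssh.connect('localhost') as conn:
        try:
            result = await conn.run('show inventory', timeout=1)
        except asyncssh.TimeoutError as exc:
            # Partial output gathered before the timeout is available
            # on the exception itself
            print('Timed out; output so far:', exc.stdout, end='')
        else:
            print(result.stdout, end='')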

settlej

comment created time in 22 days

push event ronf/asyncssh

Ron Frederick

commit sha 5179ffa59b95258d48de9a0fbe171335b45ad6eb

Add timeout to SSHClientProcess.wait and SSHClientConnection.run

This commit adds an optional timeout argument in the SSHClientProcess.wait() and SSHClientConnection.run() methods, allowing the caller to specify the maximum time for the process to exit. If this time is exceeded, a TimeoutError exception is raised which contains the process information, including any output on stdout and stderr received before the timeout occurred.

view details

push time in 22 days

issue comment ronf/asyncssh

"open failed" SSH exception error

This isn't really much to go on, and I don't see anything in AsyncSSH which outputs "open failed" in an exception. The closest I can find is a debug message which looks like:

            self.logger.debug1('Open failed for channel type %s: %s',
                               chantype, exc.reason)

This happens when a ChannelOpenError is raised, but the actual reason for the failure should appear in this message.

babuloseo

comment created time in 23 days

issue comment ronf/asyncssh

SSH command send to Cisco ASA hangs on command execution

Yes - you can use asyncio.wait_for() to set a timeout. It would look something like:

    async with asyncssh.connect('localhost') as conn:
        proc = await conn.create_process('show inventory')

        try:
            await asyncio.wait_for(proc.wait_closed(), timeout=1)
        except asyncio.TimeoutError:
            pass

        stdout, stderr = proc.collect_output()

One thing I noticed as I wrote this is that you have a '\n' after 'show inventory' in your example. You shouldn't need that when you pass in a command to run. If you remove that, do you still run into the issue of the ASA not closing the connection after running the command?

The only time you should need to put a '\n' like that is if you open a channel without passing in a command and then you later write the command to the channel yourself or make the command part of the data you provide via stdin.
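To make the distinction concrete (a sketch; 'show inventory' is just the example command from above):

    # Command passed to run(): no trailing '\n' needed
    result = await conn.run('show inventory')

    # Command written to the channel yourself: the '\n' is required,
    # just as if you were typing it interactively
    proc = await conn.create_process()
    proc.stdin.write('show inventory\n')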

settlej

comment created time in 24 days

issue comment ronf/asyncssh

SSH command send to Cisco ASA hangs on command execution

From your trace, it would appear that the Cisco ASA is not actually sending EOF or closing the channel after running your command. Instead, it is outputting a prompt and probably waiting for additional command input. I wouldn't expect that, since you are passing the command to run in the call to run(). On most SSH servers, that runs the command you specify and then closes the channel when the command completes. Since that's not happening here, though, you are waiting for additional output and the ASA is waiting for you to provide additional input.

To work around this, you could perhaps try adding stdin=asyncssh.DEVNULL to the arguments to run(), so you are letting the ASA know you aren't going to provide any more input. Hopefully, this will cause it to close its end of the connection as soon as it is done generating all the output from the command you ran. There might still be an extra prompt showing up in the command's output, but you could strip that off if you wanted to after run() returns.
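In code, that suggestion is a single extra argument (a sketch):

    result = await conn.run('show inventory', stdin=asyncssh.DEVNULL)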

I'm guessing the 300 seconds here is the inactivity timeout configured on the ASA, so it closes the connection after waiting that long and not getting any additional input.

settlej

comment created time in 24 days

issue comment ronf/asyncssh

Passing a tuple of paths to upload over SFTP fails

Thanks very much for reporting this issue! I've checked in a fix in the "develop" branch as commit 88fe0d4 if you'd like to give it a try.

Lyrositor

comment created time in a month

push event ronf/asyncssh

Ron Frederick

commit sha 88fe0d42627a11467a272555d4030307bc128fd1

Fix issue with passing tuple of strings to SFTP copy functions

This commit fixes a problem seen when passing a tuple of source files to the SFTP copy functions (get, put, copy, mget, mput, and mcopy). It also refactors that code to reduce duplication. My thanks go to Marc Gagné for reporting this issue and doing some initial analysis!

view details

push time in a month

issue comment ronf/asyncssh

can't connect anymore with known_hosts=None after fe757eae9fb1df5002

Great - thanks for the confirmation!

Unfortunately, unless I introduce code which checks the peer's version string to see if it is a Dropbear server and re-orders the host key algs, it'll be difficult to automatically work around this on the AsyncSSH side. I think explicitly specifying server_host_key_algs might be your best bet in this case.

Alternately, it looks like Dropbear understands ECDSA algorithms and those are earlier in the list. So, you might not see this issue when the Dropbear server is using an EC host key instead of an RSA one.
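For the explicit workaround, that would be something like the following (a sketch; adjust the list to match your server's host key type):

    conn = await asyncssh.connect(host, known_hosts=None,
                                  server_host_key_algs=['ssh-rsa'])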

sanga

comment created time in a month

issue comment ronf/asyncssh

Having trouble setting up basic SSH Tunnel

When using the "tunnel" feature (or better yet the connect_ssh() method), you don't need to set up your own port forwarding to port 22 on the remote system. It's actually much simpler than what you're doing here. All you should need is:

async with asyncssh.connect(bastion_host) as bastion:
    async with bastion.connect_ssh(target_host) as conn:
        ...  # Open whatever sessions you need on "conn"

The outer connection is to the bastion host, so you can add whatever parameters you need there in terms of known_hosts and username/password or client_keys to log into the bastion host, and then once that's connected the next line will open an outbound direct TCP/IP connection to the target_host through the bastion connection, so you'd specify the known_hosts and username/password or client_keys for the target_host in the bastion.connect_ssh() call. That takes the same options as asyncssh.connect().

slyduda

comment created time in a month

issue comment ronf/asyncssh

can't connect anymore with known_hosts=None after fe757eae9fb1df5002

I dug around a bit in the Dropbear source, and I think I may have found the issue. The buf_match_algo() function has some limits on both the number of proposed algorithms and the maximum name length of each algorithm. While the name length is ok (up to 64 bytes per algo name), the limit on the maximum proposed algorithms is 20 in sysoptions.h:

#define MAX_PROPOSED_ALGO 20

This is later used in buf_match_algo() in common-algo.c, and it looks like anything beyond the first 20 algorithms might not be matched, as they don't fit in the fixed-size array it allocates:

algo_type * buf_match_algo(buffer* buf, algo_type localalgos[],
		enum kexguess2_used *kexguess2, int *goodguess)
{
	char * algolist = NULL;
	const char *remotenames[MAX_PROPOSED_ALGO], *localnames[MAX_PROPOSED_ALGO];
	unsigned int len;
	unsigned int remotecount, localcount, clicount, servcount, i, j;
	algo_type * ret = NULL;
	const char **clinames, **servnames;

	if (goodguess) {
		*goodguess = 0;
	}

	/* get the comma-separated list from the buffer ie "algo1,algo2,algo3" */
	algolist = buf_getstring(buf, &len);
	TRACE(("buf_match_algo: %s", algolist))
	if (len > MAX_PROPOSED_ALGO*(MAX_NAME_LEN+1)) {
		goto out;
	}

	/* remotenames will contain a list of the strings parsed out */
	/* We will have at least one string (even if it's just "") */
	remotenames[0] = algolist;
	remotecount = 1;
	for (i = 0; i < len; i++) {
		if (algolist[i] == '\0') {
			/* someone is trying something strange */
			goto out;
		}
		if (algolist[i] == ',') {
			algolist[i] = '\0';
			remotenames[remotecount] = &algolist[i+1];
			remotecount++;
		}
		if (remotecount >= MAX_PROPOSED_ALGO) {
			break;
		}
	}

        ...
}

The overall buffer length check is ok, because 20*(64+1) is 1300 bytes, which is bigger than what AsyncSSH is sending. However, the full list of algorithms in v2.2.0 is now 22, meaning 'ssh-rsa' and 'ssh-dss' fall beyond the 20 algorithms that Dropbear is willing to match against.

I'm guessing that moving ssh-rsa earlier in the list will help, as would reducing the list of algorithms that AsyncSSH sends.

On the Dropbear side, increasing MAX_PROPOSED_ALGO would also be a pretty simple fix for this.

sanga

comment created time in a month

issue comment ronf/asyncssh

can't connect anymore with known_hosts=None after fe757eae9fb1df5002

Thank you for the additional info!

The "No matching also hostkey" is definitely interesting. Looking at the two host key lists, v2.2.0 is a strict superset of 2.1.0, adding the following algorithms:

sk-ssh-ed25519-cert-v01@openssh.com
sk-ecdsa-sha2-nistp256-cert-v01@openssh.com
sk-ssh-ed25519@openssh.com
sk-ecdsa-sha2-nistp256@openssh.com

So, if the Dropbear server found a match in v2.1.0, it should also be able to find a match in the v2.2.0 case. However, I wonder if perhaps the problem could be related to the length of the host key algorithm list. With the added algorithms, the length of the list is 579 bytes in v2.2.0 vs. 437 bytes in v2.1.0. Perhaps Dropbear has a length limit on that.

In the connect() call, could you try setting server_host_key_algs=['ssh-rsa'] and see if that still has the error? If that works, try using a longer list, like

server_host_key_algs=['ssh-ed25519-cert-v01@openssh.com', 'ssh-ed448-cert-v01@openssh.com', 'ecdsa-sha2-nistp521-cert-v01@openssh.com', 'ecdsa-sha2-nistp384-cert-v01@openssh.com', 'ecdsa-sha2-nistp256-cert-v01@openssh.com', 'ecdsa-sha2-1.3.132.0.10-cert-v01@openssh.com', 'ssh-rsa-cert-v01@openssh.com', 'ssh-dss-cert-v01@openssh.com', 'ssh-ed25519', 'ssh-ed448', 'ecdsa-sha2-nistp521', 'ecdsa-sha2-nistp384', 'ecdsa-sha2-nistp256', 'ecdsa-sha2-1.3.132.0.10', 'rsa-sha2-256', 'rsa-sha2-512', 'ssh-rsa', 'ssh-dss']

This is the list from v2.1.0. If that works, it might also be interesting to try:

server_host_key_algs=['ssh-rsa', 'sk-ssh-ed25519-cert-v01@openssh.com', 'sk-ecdsa-sha2-nistp256-cert-v01@openssh.com', 'ssh-ed25519-cert-v01@openssh.com', 'ssh-ed448-cert-v01@openssh.com', 'ecdsa-sha2-nistp521-cert-v01@openssh.com', 'ecdsa-sha2-nistp384-cert-v01@openssh.com', 'ecdsa-sha2-nistp256-cert-v01@openssh.com', 'ecdsa-sha2-1.3.132.0.10-cert-v01@openssh.com', 'ssh-rsa-cert-v01@openssh.com', 'ssh-dss-cert-v01@openssh.com', 'sk-ssh-ed25519@openssh.com', 'sk-ecdsa-sha2-nistp256@openssh.com', 'ssh-ed25519', 'ssh-ed448', 'ecdsa-sha2-nistp521', 'ecdsa-sha2-nistp384', 'ecdsa-sha2-nistp256', 'ecdsa-sha2-1.3.132.0.10', 'rsa-sha2-256', 'rsa-sha2-512', 'ssh-dss']

This is the full list from v2.2.0, but with 'ssh-rsa' moved to the front. I don't know if this would help or not if it is a buffer length issue, but it'd be an interesting data point.

sanga

comment created time in a month

issue comment ronf/asyncssh

can't connect anymore with known_hosts=None after fe757eae9fb1df5002

Can you get any logs from the Dropbear server to see if it is reporting an error? Also, can you increase the debug level to 2 and grab the logs from the client? The commands for that would be something like:

import logging
logging.basicConfig(level='DEBUG')
asyncssh.set_debug_level(2)
sanga

comment created time in a month

PR opened fingolfin/ssh-comparison

Update asyncssh.md
+6 -2

0 comment

1 changed file

pr created time in a month

push event ronf/ssh-comparison

Ron Frederick

commit sha 767f603fa43db2e06492d5f67d07f6b6f3bc620d

Update asyncssh.md

view details

push time in a month

create branch ronf/ssh-comparison

branch : asyncssh

created branch time in a month

fork ronf/ssh-comparison

Comparison of various SSH implementations in terms of supported crypto protocols

https://ssh-comparison.quendi.de/

fork in a month

PR closed ronf/asyncssh

Proposed fix for exception caused when using scp to send files that h…

…ave spaces in their names #253

This is my first time submitting a pull request to an open source library, so I hope I have followed your procedures correctly: "Before submitting a pull request, make sure to indicate that you are OK with releasing your code under this license and how you'd like to be listed in the contributors list."

This fix is working for me; if you think it's suitable for release, please do.

+1 -1

1 comment

1 changed file

knkp

pr closed time in a month

pull request comment ronf/asyncssh

Proposed fix for exception caused when using scp to send files that h…

This fix is in commit c992ec4 and is now available in AsyncSSH 2.2.0. Thanks for your report and proposed fix!

knkp

comment created time in a month

issue closed ronf/asyncssh

open client connection as interactive shell

Hi, I'm trying to update code written using Paramiko to AsyncSSH. I need to connect to an SSH server using an "interactive shell", so all the ENV will be set correctly. I can do this in Paramiko using the "invoke_shell()" method, but haven't managed to find a way to do a similar thing with AsyncSSH. Is there a way to do this?

closed time in a month

omribentov

issue comment ronf/asyncssh

open client connection as interactive shell

Closing due to inactivity. If you have additional questions, feel free to open a new issue.

omribentov

comment created time in a month

issue closed ronf/asyncssh

Redirecting stderr to stdout in create_process?

I'm attempting to record stdout and stderr interleaved (i.e. what you would see on console) and save it into StringIO instance. I thought this was what process.redirect was for, but it doesn't seem to be working for me.

The code (approximately) is:

remotehost = 'my.remote.host'
cmd = 'cmd -i want to run'
output = io.StringIO()

async def write_output(data):
    # Write output to some websockets.
    # ...
    pass

async with asyncssh.connect(remotehost) as conn:
    async with conn.create_process(cmd) as process:
        process.redirect(stderr=process.stdout)   # This is what I thought would work
        while process.exit_status is None:
            newoutput = await process.stdout.read(1024)
            await write_output(newoutput)
            output.write(newoutput)

However, when I run this, I only seem to be getting stdout. Is this not how redirect should be used?

I guess alternatively I can just redirect it in the cmd, but I was hoping to not be dependent on the shell on remotehost.

(Also, thanks for the library, it made what I'm trying to do really easy, except this one little thing.)

closed time in a month

btyoung

issue comment ronf/asyncssh

Redirecting stderr to stdout in create_process?

Closing due to inactivity. If you have additional questions, feel free to open a new issue.

btyoung

comment created time in a month

issue closed ronf/asyncssh

overcoming buffering

I'm trying to implement tailing ability in parallel in this fashion.

command = 'stdbuf -eL -oL ' + command
async with asyncssh.connect(host) as conn:
    await conn.run(command, stdout='/tmp/prun_%s.out' % host)

But the output seems to be buffered even if I provide the -u on the shebang line or even try to unbuffer sys.stdout. Can you guide me on this?

I don't believe this is an issue with asyncssh, so is there a better forum to ask questions like this?

Thanks

closed time in a month

pete312

issue comment ronf/asyncssh

overcoming buffering

Closing due to inactivity. If you have additional questions, feel free to open a new issue.

pete312

comment created time in a month

issue closed ronf/asyncssh

asyncio.run makes asyncssh Connection lost strangely. bug?

import asyncio
import asyncssh
import logging
class machine:
    pass
a = machine()
a.host = '192.168.0.1'
a.password = '1'
b = machine()
b.host = '192.168.0.2'
b.password = '2'
machines = [a, b]

async def connect_with_retry(machine):
    machine.conn = await asyncssh.connect(machine.host, username='root', password=machine.password, known_hosts=None, keepalive_interval=1)

async def run_command(machine, command='ll', timeout=2):
    return 1

async def main():
    for _ in machines:
        asyncio.create_task(connect_with_retry(_))
    # await asyncio.sleep(5)
    print(await asyncio.gather(*[run_command(_, 'll') for _ in machines]))

loop = asyncio.get_event_loop()
asyncio.get_event_loop().set_debug(True)
logging.basicConfig(level=logging.DEBUG)
asyncio.run(main(), debug=True)

will always throw asyncssh.misc.ConnectionLost: Connection lost, except when using run_until_complete or run_forever, or when uncommenting the asyncio.sleep() call

In ./Lib/asyncio/runners.py, the run definition is

def run(main, *, debug=False):
...
    try:
        events.set_event_loop(loop)
        loop.set_debug(debug)
        return loop.run_until_complete(main)
    finally:
        try:
            _cancel_all_tasks(loop)
            loop.run_until_complete(loop.shutdown_asyncgens())
...

def _cancel_all_tasks(loop):
    to_cancel = tasks.all_tasks(loop)
    if not to_cancel:
        return

    for task in to_cancel:
        task.cancel()

    loop.run_until_complete(
        tasks.gather(*to_cancel, loop=loop, return_exceptions=True))

it does cancel tasks while asyncssh is still connecting. Let's assume asyncio is right; then is this a bug? It's strange behavior, and it took time to find out what's going on.

closed time in a month

NewUserHa

issue comment ronf/asyncssh

asyncio.run make asyncssh Connection lost strangely. bug?

Closing due to inactivity. If you have additional questions, feel free to open a new issue.

NewUserHa

comment created time in a month

issue closed ronf/asyncssh

sporadic endpoint failures.

I have an issue when making a relatively large number of calls to machines, basically following this pattern: https://asyncssh.readthedocs.io/en/latest/#running-multiple-clients

async def run_client(host, command, creds, callable=None):
    try:
        async with asyncssh.connect(host, username=creds['username'], password=creds['password'], known_hosts=None) as conn:
            result = await conn.run(command)
            if callable:
                callable(result)
            
            return result
    except Exception as e:
        return print(e)

One difference to note is known_hosts=None. I get this error sporadically. Do you know what is causing this?

2019-04-17 08:03:14,451 base_events.default_exception_handler 1260 ERROR: Fatal write error on socket transport
protocol: <asyncio.streams.StreamReaderProtocol object at 0x7f9a56b17f28>
transport: <_SelectorSocketTransport fd=90 read=polling write=<idle, bufsize=0>>
Traceback (most recent call last):
  File "/opt/rh/rh-python36/root/usr/lib64/python3.6/asyncio/selector_events.py", line 762, in write
    n = self._sock.send(data)
OSError: [Errno 107] Transport endpoint is not connected

closed time in a month

pete312

issue comment ronf/asyncssh

sporadic endpoint failures.

Closing due to inactivity. Feel free to open this again if you are still seeing the problem and have any more information to report.

pete312

comment created time in a month

issue closed ronf/asyncssh

Register generic keyevents

I would like to propose a way to register keyevents which will result in an exception when the key is pressed, similar to the BreakReceived exception, but with information about the key being pressed.

In my use case I'm writing an interactive SSH application where the user input will manipulate windows and menus. For example, when a user presses the "tab" key the window should be switched or the cursor moved to a different input box on the screen.

With this issue I would like to discuss whether this would be useful in the project. If considered useful, I can polish up my changes into a proper pull request.

Maybe this helps with some clarification:

# Locally I would now have the register_keyevent function available on the channel:
process.channel.register_keyevent("\t")

# Then in my main loop I would do:
except asyncssh.KeyeventReceived as kevt:
    key_pressed = kevt.keyevent
    # do things with the event

closed time in a month

unboiled

issue comment ronf/asyncssh

Register generic keyevents

Closing due to inactivity. If you're still interested in this, please re-open it or create a new issue.

unboiled

comment created time in a month

issue closed ronf/asyncssh

SSHConnection should subclass asyncio.Protocol

asyncssh.connection.SSHConnection implements the methods of asyncio.Protocol but doesn't inherit from it. This makes type checkers complain.

Actual behavior

Consider the following sample code:

import asyncio
import asyncssh

def custom_connect_logic(protocol: asyncio.Protocol):
    """Logic in the networking layer of my application"""

conn = asyncssh.SSHClientConnection(
    host='127.0.0.1', port=22,
    loop=asyncio.get_event_loop(),
    options=asyncssh.SSHClientConnectionOptions(),
)

custom_connect_logic(conn)

conn on the last line is highlighted yellow with a warning in PyCharm.

Expected behavior

It would be nice to have this warning go away.

Environment

requirements.txt

asyncssh==2.1.0

python --version

Python 3.8.0

lsb_release -a

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu Focal Fossa (development branch)
Release:        20.04
Codename:       focal

closed time in a month

danielzgtg

issue comment ronf/asyncssh

SSHConnection should subclass asyncio.Protocol

This change is now available in AsyncSSH 2.2.0.

danielzgtg

comment created time in a month

issue closed ronf/asyncssh

Endless loop on non-SSH server

asyncssh doesn't close the connection when connecting to a server that isn't SSH. For example, connecting to a server that outputs /dev/urandom and blackholes input will cause asyncssh to get stuck. This causes a denial-of-service on the client.

Server example

ncat -l 9999 --send-only --exec "/bin/cat /dev/urandom"

Procedure

import asyncio, asyncssh
asyncio.run(asyncssh.get_server_host_key('127.0.0.1', 9999))

Actual behavior

The asyncio.run line blocks forever when pasted into the REPL. The Python process stays at 100% CPU.

Expected behavior

The connection is closed after at most a few seconds. An exception is then thrown.

OpenSSH behaves like this:

$ ssh -p 9999 127.0.0.1
kex_exchange_identification: banner line contains invalid characters

Environment

requirements.txt

asyncssh==2.1.0

python --version

Python 3.8.0

lsb_release -a

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu Focal Fossa (development branch)
Release:        20.04
Codename:       focal

closed time in a month

danielzgtg

issue comment ronf/asyncssh

Endless loop on non-SSH server

This change is now available in AsyncSSH 2.2.0.

danielzgtg

comment created time in a month

issue closed ronf/asyncssh

ConnectionResetError: [Errno 104] Connection reset by peer

I get the above error when running the following code:

await asyncio.wait_for(
    asyncssh.connect(
        hostname,
        password=...,
        username=...,
        known_hosts=None,
        client_keys=[....],
    ),
    timeout=5,
)

The traceback shows that the problem originated with peername being None

Exception in callback SSHConnection.connection_made(<_SelectorSoc...e, bufsize=0>>)
handle: <Handle SSHConnection.connection_made(<_SelectorSoc...e, bufsize=0>>)>
Traceback (most recent call last):
  File "/usr/lib/python3.8/asyncio/events.py", line 81, in _run
    self._context.run(self._callback, *self._args)
  File ".../lib/python3.8/site-packages/asyncssh/connection.py", line 692, in connection_made
    self._peer_addr, self._peer_port = peername[:2]
TypeError: 'NoneType' object is not subscriptable

I found this issue where you explained that peername can be None and that it's best to check if it's None before attempting to split it. So is it as simple as checking if peername is None and, if it is, skipping over that line? If so, I'd be happy to make a PR with a fix.

closed time in a month

d1618033

issue comment ronf/asyncssh

ConnectionResetError: [Errno 104] Connection reset by peer

This change is now available in AsyncSSH 2.2.0.

d1618033

comment created time in a month

issue closed ronf/asyncssh

Asyncssh.scp seems to have trouble with file names that have spaces in them.

First I gotta say, as I've been digging into this library trying to figure out what's wrong I've been more and more impressed with the design behind it. Learned a lot just reading the code. Thank you for all your hard work.

OK, I wouldn't be surprised if it's just something I'm doing wrong, or maybe I've wandered into a taboo. But I cannot get scp to push files to a server if they have spaces in the names. Trying to do this under Windows 10, and I'm working with asyncssh==1.18.0.

I've tried several methods to trace what's going on, and it always comes back to this: asyncssh is aware of the file all the way up to the point that it tries the actual transfer, and then it fails with: bytearray(b'\x01scp: Invalid copy or dir request\n') I took that from a print statement I added located in

SSHChannel:
 ...
def _flush_send_buf(self)

I have tried some of the wildcard patterns, replacing the space in the filename with * and ?. I also replaced the \\ with / because I found out that the matching requires this in order for it to work its way through a path.

I see that eventually it does match the file, and then it replaces the special characters with the space again and tries to send, which then fails.

I'm sending it this way:

async with asyncssh.connect(self.SERVER_HOST, self.PORT, client_keys=self.CLIENT_KEY) as conn:
    await asyncssh.scp('c:/somepath/example with spaces.txt',(conn, b'.'))

This works for me under test with file names that do not have spaces. But for the life of me, I can't get it to work with them.

Thanks for any insight.

closed time in a month

knkp

issue comment ronf/asyncssh

Asyncssh.scp seems to have trouble with file names that have spaces in them.

This fix is now available in AsyncSSH 2.2.0!

knkp

comment created time in a month

issue closed ronf/asyncssh

Removed files in wheel

I noticed that the wheel file of asyncssh 2.1.0 on pypi contains the following removed directories/files.

[Screenshot: listing of the asyncssh-2.1.0-py3-none-any.whl archive contents, taken 2020-01-15, showing the removed directories/files]

So users who installed asyncssh via the wheel see a quite confusing message when they accidentally try to import asyncssh.crypto.pyca or whatever.

Python 3.7.6 (default, Dec 25 2019, 14:48:36)
[Clang 11.0.0 (clang-1100.0.33.8)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import asyncssh.crypto.pyca
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/alisue/.anyenv/envs/pyenv/versions/3.7.6/lib/python3.7/site-packages/asyncssh/crypto/pyca/__init__.py", line 15, in <module>
    from . import cipher
  File "/Users/alisue/.anyenv/envs/pyenv/versions/3.7.6/lib/python3.7/site-packages/asyncssh/crypto/pyca/cipher.py", line 144, in <module>
    CipherFactory(_cipher, _mode))
TypeError: register_cipher() missing 1 required positional argument: 'block_size'
>>>

closed time in a month

lambdalisue

issue commentronf/asyncssh

Removed files in wheel

This problem should now be corrected with the newly released AsyncSSH 2.2.0 wheel. Thanks for the report!

lambdalisue

comment created time in a month

issue closedronf/asyncssh

U2F support

Hello,

Recently, native support for authentication with U2F keys has landed in OpenSSH. At the moment it has experimental status, but I don't expect any problems to appear, because U2F is pretty mature, sound, and already widely adopted.

I wonder if you'd consider implementing this feature as well. At the least, server-side support should be pretty much like authentication with ECDSA keys, and from my understanding it won't require introducing additional dependencies.

closed time in a month

Snawoot

issue commentronf/asyncssh

U2F support

This support is now released in AsyncSSH 2.2.0!

Snawoot

comment created time in a month

push eventronf/asyncssh

Ron Frederick

commit sha 8359429e99def7c7607986666bb95de5ffb151f8

Bump version number up to 2.2.0 and update change log

view details

push time in a month

created tagronf/asyncssh

tagv2.2.0

AsyncSSH is a Python package which provides an asynchronous client and server implementation of the SSHv2 protocol on top of the Python asyncio framework. It requires Python 3.4 or later and the Python cryptography library for some cryptographic functions.

created time in a month

push eventronf/asyncssh

Ron Frederick

commit sha fe757eae9fb1df50023d79e877e54953d29e1369

Check in first cut at support for U2F/FIDO2 security keys

This commit adds support for OpenSSH-compatible U2F and FIDO2 security keys in AsyncSSH. It has been tested against openssh-portable HEAD, but since that is not part of an official release, this implementation may need changes to remain compatible with whatever official OpenSSH release this feature ships in. In the meantime, please enjoy an early preview of this functionality.

Features in this commit include:

  • ECDSA and Ed25519 signature algorithms
  • Generation of new keys, with an option to require touch or not
  • Generation of certificates, both as a key being signed and a CA key
  • Read and write of OpenSSH-compatible key and certificate file formats
  • Access to and management of keys loaded in an OpenSSH ssh-agent
  • Compatibility with OpenSSH "middleware" provider API (libsk-libfido2)

In addition to supporting these key types for user keys and certificates, this implementation also supports them for host keys and certificates. However, since OpenSSH doesn't have this support yet, AsyncSSH must be both the client and server to take advantage of this (at least for now).

view details

Ron Frederick

commit sha de64a07da41527fcbe6519a9ea53dcf18dd318d1

Allow specifying a certificate to use with an existing keypair This commit allows you to update the SSH certificate used with an existing keypair. This can be used if you want to associate a certificate with a keypair tied to an ssh-agent key which wasn't imported with the certificate you want to use. It can also be used to associate a certificate with a key on a smart card, which doesn't currently allow a certificate to be imported in ssh-agent.

view details

Ron Frederick

commit sha 29bdda514c6649c842d0d7c2ec6bba0745766d94

Allow load_keypairs() to read public key files This commit changes the load_keypairs() function to support reading public key files in addition to certificates. This can be triggered by passing in a tuple of (priv_key, pub_key), or the public key can be found automatically when the private key is read from a file, by appending ".pub" to the private key filename. Right now, the only data extracted from the public key file is the comment, and it will be set as the comment on the resulting keypair object if there's no comment set in either the private key file or certificate. This can be useful when the private key is written out in PKCS1 or PKCS8 format which don't support comments, but the public key is written out in OpenSSH or RFC4716 format. Checking is done to make sure the public and private key files refer to the same key before it is used. If not present or not readable, the public key file is ignored.

view details

Ron Frederick

commit sha 7b73a4eab5956743c0e713ee7d9ae60a77986c5b

Fix typo in docstring Thanks go to Mikhail Terekhov for catching this!

view details

Ron Frederick

commit sha d771d649ffe80c51eaf4afd2e58213782844903d

Update pywin32 dependency This commit changes the "extras" dependency in AsyncSSH from "pypiwin32" to "pywin32", as the former package is no longer being maintained.

view details

Ron Frederick

commit sha c40feba485308ba307ef8b3f80148e2613d3bfb6

Work around false reports in latest version of coverage tool After upgrading the version of coverage here, I noticed it was reporting missing partial branch coverage on for loops that did not seem correct. This commit is a bit of a hack to work around those false reports.

view details

Ron Frederick

commit sha c992ec4dd0e05997692763fbce2b690452a93608

Fix an issue in the SCP server related to handling files with spaces This commit fixes a problem handling filenames with spaces in the SCP server code. Thanks go out to Stephen Copeland for reporting the issue and suggesting a fix! This also includes a fix for an issue related to handling of filenames with colons in them passed as byte strings. This issue was previously fixed when passing in regular strings, but the byte string case was missed.

view details

Ron Frederick

commit sha 4b9025fb6dc949b41bc13fa1f238b4f6567b1dc3

Fix an error in a comment

view details

Ron Frederick

commit sha 8777c50abe6bab0896cfbbac85b49a018b1adaed

Update U2F/FIDO2 security key support to use Python fido2 module

This commit updates the AsyncSSH U2F/FIDO2 security key support to be based on the Python fido2 module, rather than the "libsk-libfido2" middleware library which was originally used by OpenSSH. That library was folded into the OpenSSH source code and is no longer available to install separately.

One benefit to this is that the necessary FIDO2 library support can now be installed with 'pip', and AsyncSSH now has a 'fido2' optional extra which will install FIDO2 support when AsyncSSH is installed.

This new support should continue to provide support for both U2F (CTAP version 1) and FIDO2 (CTAP version 2) keys, and for both the ECDSA and Ed25519 signature algorithms. Both user and host keys are supported, as are OpenSSH certificates based on these keys. As before, support is also available for using and managing keys via an OpenSSH 8.2 or later ssh-agent.

Interoperability testing has been done against OpenSSH 8.2, where this support has now been released. However, AsyncSSH doesn't yet support the "resident key" feature found in OpenSSH 8.2 -- this should be coming in a future commit.

view details

Ron Frederick

commit sha 1cf2a8479976322d649caeb56e0bee34d5eb4e05

Fix an issue with resuming reading when readuntil() returns incomplete read This commit fixes a problem where reading is not resumed when readuntil() fails to find a separator match and ends up raising IncompleteReadError with a partial result. With this change, readuntil() can be called again after the exception to continue reading data, until EOF or a signal is received. Note: Multi-character separators could be missed if they happen to be split across the partial results which are returned.

view details

Ron Frederick

commit sha d3860e61006f1e8f5e7e8b11a65f89ed559a4f98

Protect client against connecting to non-compliant SSH servers This commit adds support for a login_timeout client option, to limit the amount of time the AsyncSSH client will wait for the initial SSH handshake and authentication to complete. It defaults to two minutes, just like the existing server-side login_timeout option. In addition, this commit adds limits on how long banner lines sent prior to the server version string can be, and on the number of such banner lines. This will allow the client to abort more quickly if it gets a large amount of output that doesn't match what it is expecting to get from an SSH server.

view details

Ron Frederick

commit sha 303ae6bd55599df97bd0b713321c6cc8f114976c

Correct module doc comment

view details

Ron Frederick

commit sha 3cafac0290db2cf5a25ddfdcf2443c79cb66ad10

Enhance FIDO2 security key support

This commit enhances the AsyncSSH FIDO2 support in a number of ways:

  • A PIN can now be provided (if needed) when managing security keys
  • Resident keys can now be created and loaded
  • User and application names can now be set when generating keys
  • Support has been added for "no-touch-required" in authorized_keys
  • Support has been added for "no-touch-required" in OpenSSH certificates
  • Unit tests have been expanded to cover certificates based on security keys, use of security keys as host keys, enforcement of no-touch-required, and a much greater number of error conditions

view details

Ron Frederick

commit sha cd7248ad5f25bcf1c73bd3849f33a88ea8b7e07a

Change SSHConnection to inherit from asyncio.Protocol for type checkers This commit makes SSHConnection a subclass of asyncio.Protocol, to make type checkers happy. Note: The signature on data_received() in SSHConnection is slightly different than the one in asyncio.Protocol. It takes an extra optional argument of datatype, to support tunneled SSH connections. Due to this change, it was necessary to disable the arguments-differ warning from pylint.

view details

Ron Frederick

commit sha a7e5bfb5f5fe333e69a4d5f4dcfe787894d990c6

Protect against sockname/peername of a transport being None

This commit adds defensive code to handle the case where asyncio reports the sockname or peername of a transport as None but calls the connection_made() method anyway. It looks like this can happen in some cases when a connection dies immediately after starting up, before this information can be collected. My thanks go out to David S who identified this issue and provided a proposed fix.

view details

Ron Frederick

commit sha 8359429e99def7c7607986666bb95de5ffb151f8

Bump version number up to 2.2.0 and update change log

view details

push time in a month

PR closed ronf/asyncssh

Issue-259: Check that peername is not None

fixes #259

+2 -1

1 comment

1 changed file

d1618033

pr closed time in a month

pull request commentronf/asyncssh

Issue-259: Check that peername is not None

Thanks - a variation of this fix is available in commit a7e5bfb.

d1618033

comment created time in a month

issue commentronf/asyncssh

ConnectionResetError: [Errno 104] Connection reset by peer

The defensive code to handle sockname/peername being None is now commit a7e5bfb in the "develop" branch. Thanks for identifying this issue and the related code in asyncio where it can happen, as well as your proposed fix! This'll go out in a new release shortly.

d1618033

comment created time in a month

issue commentronf/asyncssh

SSHConnection should subclass asyncio.Protocol

Ok - this change is commit cd7248a in the "develop" branch.

When I actually made the change, it reminded me why I didn't do it before. The signature of data_received() in SSHConnection is actually different from the one in asyncio.Protocol: it takes an optional "datatype" argument, to allow for tunneled SSH connections. Due to this extra argument, the pylint arguments-differ warning had to be suppressed on SSHConnection.data_received(). However, to make type checkers happy, I've gone ahead with the change anyway.
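
In other words, the shape of the change is roughly this (a simplified sketch, not the actual method body):

import asyncio

class SSHConnection(asyncio.Protocol):
    def data_received(self, data, datatype=None):  # pylint: disable=arguments-differ
        # The extra optional "datatype" argument is what supports
        # tunneled SSH connections, and what makes the signature
        # differ from asyncio.Protocol.data_received(self, data)
        pass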

danielzgtg

comment created time in a month

push eventronf/asyncssh

Ron Frederick

commit sha cd7248ad5f25bcf1c73bd3849f33a88ea8b7e07a

Change SSHConnection to inherit from asyncio.Protocol for type checkers This commit makes SSHConnection a subclass of asyncio.Protocol, to make type checkers happy. Note: The signature on data_received() in SSHConnection is slightly different than the one in asyncio.Protocol. It takes an extra optional argument of datatype, to support tunneled SSH connections. Due to this change, it was necessary to disable the arguments-differ warning from pylint.

view details

Ron Frederick

commit sha a7e5bfb5f5fe333e69a4d5f4dcfe787894d990c6

Protect against sockname/peername of a transport being None

This commit adds defensive code to handle the case where asyncio reports the sockname or peername of a transport as None but calls the connection_made() method anyway. It looks like this can happen in some cases when a connection dies immediately after starting up, before this information can be collected. My thanks go out to David S who identified this issue and provided a proposed fix.

view details

push time in a month

issue commentronf/asyncssh

U2F support

Enhanced support for FIDO2 keys is now available in commit 3cafac0 in the "develop" branch. Changes include:

  • A PIN can now be provided (if needed) when managing security keys
  • Resident keys can now be created and loaded
  • User and application names can now be set when generating keys
  • Support has been added for "no-touch-required" in authorized_keys
  • Support has been added for "no-touch-required" in OpenSSH certificates
  • Unit tests have been expanded to cover certificates based on security keys, use of security keys as host keys, enforcement of no-touch-required, and a much greater number of error conditions.

If all goes well, I'm hoping to have an official release with this support out later today.

Snawoot

comment created time in a month

push eventronf/asyncssh

Ron Frederick

commit sha 303ae6bd55599df97bd0b713321c6cc8f114976c

Correct module doc comment

view details

Ron Frederick

commit sha 3cafac0290db2cf5a25ddfdcf2443c79cb66ad10

Enhance FIDO2 security key support

This commit enhances the AsyncSSH FIDO2 support in a number of ways:

  • A PIN can now be provided (if needed) when managing security keys
  • Resident keys can now be created and loaded
  • User and application names can now be set when generating keys
  • Support has been added for "no-touch-required" in authorized_keys
  • Support has been added for "no-touch-required" in OpenSSH certificates
  • Unit tests have been expanded to cover certificates based on security keys, use of security keys as host keys, enforcement of no-touch-required, and a much greater number of error conditions

view details

push time in a month

issue commentronf/asyncssh

SSHConnection should subclass asyncio.Protocol

Thanks for the suggestion. I did a quick test and it looks like making SSHConnection explicitly inherit from asyncio.Protocol doesn't cause any problems and I agree that it could be helpful for type checkers. I'm in the middle of some other changes right now, but I'll get this change into the "develop" branch as soon as the other work is complete and it'll be rolled into the next release.

danielzgtg

comment created time in a month

issue commentronf/asyncssh

ConnectionResetError: [Errno 104] Connection reset by peer

I'll probably bring this change into the "develop" branch first. I appreciate the pull request, but I'll probably commit something separate there along the same lines once I'm done with some other changes.

I'm thinking this protection should probably be added for both the sockname and the peername, and I'll either need to add in some unit tests or add a #pragma: no branch to make coverage happy on the case where either of those actually is None.

d1618033

comment created time in a month

issue commentronf/asyncssh

ConnectionResetError: [Errno 104] Connection reset by peer

I think that particular "Bad protocol version identification" happens any time the socket is closed without sending the initial SSH version string. So, if the socket closes for some reason before the application can write to it (which would be consistent with the getpeername() failing), you'd see that error.

The fact that they are catching and ignoring the exception when they set the 'peername' value to None could explain why connection_made() is still being called. The code snippet you posted here is definitely a good reason to add the defensive code which checks the value before unpacking it.
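
For reference, that asyncio code looks roughly like this (paraphrased from CPython's selector_events.py -- treat it as a sketch, not an exact quote):

try:
    self._extra['peername'] = sock.getpeername()
except socket.error:
    # The failure is swallowed and no peer info is reported
    self._extra['peername'] = None

So connection_made() can still be invoked with 'peername' set to None.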

As for how this could happen, I'm still thinking it might be related to the TCP connection failing for some reason after it connects but before the _SelectorTransport() is initialized. I don't know what would cause that, and it seems like it would be a pretty small window of time, but if the connection was already broken before the call to sock.getpeername(), I think we'd see what you have described here. You might want to add a debugging statement to print out the details of the socket.error exception in the code you included here, though I suspect that will just confirm that it is a "Connection reset by peer" error and not explain why that's happening.

You might also want to do a packet capture at the TCP level to see if that shows anything interesting. At the very least, that'll let you see which side the connection reset is coming from.

d1618033

comment created time in a month

issue commentronf/asyncssh

Endless loop on non-SSH server

Thanks for the confirmation! This commit will be included in the next AsyncSSH release.

danielzgtg

comment created time in a month

issue commentronf/asyncssh

Endless loop on non-SSH server

Ok - I've added functionality to help with this in commit d3860e6 in the "develop" branch. It consists of limits on the length of the SSH version line and banner lines before it and on the number of allowed banner lines before the version. So, any server generating a large amount of output that doesn't match an SSH version line will trigger these limits and cause the client to close the connection.

In addition, to deal with servers that don't send a version string or take a long time to do so, I've extended the existing server-side "login_timeout" parameter to also be available on the client side. This timeout actually covers not only the version exchange but also the SSH key exchange and authentication steps. If all of those don't complete within the configured login timeout, the client will drop the connection. This value defaults to two minutes just like on the server side, but it can be adjusted to whatever you like. It can also be disabled by setting it to 0.
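
For example, something like this (a quick sketch -- the host name and timeout value are just placeholders):

import asyncio
import asyncssh

async def main():
    # login_timeout covers the version exchange, key exchange and
    # auth steps together; setting it to 0 disables the limit
    async with asyncssh.connect('host.example.com', login_timeout=30) as conn:
        result = await conn.run('echo hello')
        print(result.stdout, end='')

asyncio.run(main())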

danielzgtg

comment created time in a month

push eventronf/asyncssh

Ron Frederick

commit sha d3860e61006f1e8f5e7e8b11a65f89ed559a4f98

Protect client against connecting to non-compliant SSH servers This commit adds support for a login_timeout client option, to limit the amount of time the AsyncSSH client will wait for the initial SSH handshake and authentication to complete. It defaults to two minutes, just like the existing server-side login_timeout option. In addition, this commit adds limits on how long banner lines sent prior to the server version string can be, and on the number of such banner lines. This will allow the client to abort more quickly if it gets a large amount of output that doesn't match what it is expecting to get from an SSH server.

view details

push time in a month

issue commentronf/asyncssh

ConnectionResetError: [Errno 104] Connection reset by peer

It's interesting that you're getting a 'Connection reset by peer' error here, rather than a 'Connection refused' or a timeout. That implies the remote system did accept the connection but then ended up resetting it, quickly enough that perhaps the client didn't have time to determine the peer information from the connection once the TCP handshake completed.

I guess there shouldn't be any harm in coding defensively for this, as you proposed. I'd just like to understand a bit better how this is happening, in case other changes might be needed as well.

d1618033

comment created time in a month

issue commentronf/asyncssh

Endless loop on non-SSH server

Thanks for the report! It's true that AsyncSSH right now will wait forever by default for the "version" line from the server. It also doesn't put an upper bound on the amount of data it will read before a newline. So, if that newline never comes, it could buffer large amounts of data, eating up RAM. Checking RFC 4253, the maximum allowed length of the version string is 255 characters, but I'm not currently enforcing that.

As a mitigation for now, you can add your own timeout around the asyncssh.connect() call using asyncio.wait_for(). If the remote system is feeding large amounts of data, it may still use 100% CPU and some memory for however long a timeout you set, but at least you'll eventually break out and can clean things up.
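
For example (a rough sketch -- the host and timeout are placeholders):

import asyncio
import asyncssh

async def main():
    try:
        # Bail out if the TCP connect plus SSH handshake doesn't
        # finish within 10 seconds
        conn = await asyncio.wait_for(asyncssh.connect('host.example.com'),
                                      timeout=10)
    except asyncio.TimeoutError:
        print('SSH handshake did not complete in time')
        return

    async with conn:
        result = await conn.run('echo hello')
        print(result.stdout, end='')

asyncio.run(main())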

I'll look into putting some default limits for this.

danielzgtg

comment created time in 2 months

issue commentronf/asyncssh

ConnectionResetError: [Errno 104] Connection reset by peer

In a quick test here, I was not able to reproduce this. I got the expected TimeoutError:

INFO:asyncssh:Opening SSH connection to 1.1.1.1, port 22
Traceback (most recent call last):
  File "./simple_client.py", line 49, in <module>
    loop.run_until_complete(asyncio.wait_for(run_client(), timeout=1))
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/asyncio/base_events.py", line 612, in run_until_complete
    return future.result()
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/asyncio/tasks.py", line 490, in wait_for
    raise exceptions.TimeoutError()
asyncio.exceptions.TimeoutError

I get the same result when wrapping the asyncssh.connect() call directly with asyncio.wait_for(), as you did in your example here.

d1618033

comment created time in 2 months

issue commentronf/asyncssh

ConnectionResetError: [Errno 104] Connection reset by peer

I think there's something more complicated going on here. If the timeout actually fired and triggered an exception before the socket successfully connected, I would not have expected the connection_made callback to be called. That's why the assignment there is unconditional -- it should only be called when a connection is successfully opened, and in that case, peername should have been set to something.

I'll see if I can reproduce this and dig a bit more this weekend into how peername is becoming None. Thanks for the report!

d1618033

comment created time in 2 months

issue commentronf/asyncssh

Record session for testing

Trying to use readuntil() to find the end of command output can be pretty unreliable. If the server supports it, you'd probably be better off running each command in a separate SSH channel, so it's clear when the command is done generating output (see the sketch at the end of this comment). You could still potentially do the replay all in a single channel where you pretend it's an interactive shell session, but it's not clear how you'd handle commands which require input from the client before they finish.

Basing the replay on a manually generated configuration file like you suggest here seems like it would be a lot simpler, and something like that could easily be written as an application on top of AsyncSSH. That could support clients which connect with either interactive shell or exec style sessions, where each command is run on a separate channel. You'd still need to have some way to represent when input was needed, though, and how much input to read. If you wanted to support something like choosing a response at random, you'd also need some syntax in the file to say where the boundaries are between the different random responses.
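
As a rough sketch of the separate-channel approach (the host and commands here are placeholders):

import asyncio
import asyncssh

async def main():
    async with asyncssh.connect('host.example.com') as conn:
        for command in ['uname -a', 'uptime']:
            # Each run() opens its own channel, so EOF on that channel
            # unambiguously marks the end of this command's output
            result = await conn.run(command)
            print(command, '->', result.stdout, end='')

asyncio.run(main())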

Syniex

comment created time in 2 months

issue commentronf/asyncssh

Keyboard-interactive auth

SSH servers typically have a limit on how many authentication attempts they will allow before closing the connection on you. In this case, it looks like you attempted authentication with 6 different public keys, all of which failed. So, the client never got a chance to try the password you had set.

If you don't want to attempt to authenticate with any of your default SSH keys, you can set "client_keys=None" in the AsyncSSH client connection options. That should disable public key authentication and let you move more quickly over to the password-based auth.
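
For example (host and credentials are placeholders):

import asyncio
import asyncssh

async def main():
    # client_keys=None disables public key auth, so the client moves
    # straight on to password / keyboard-interactive auth
    async with asyncssh.connect('host.example.com', username='user',
                                password='secret', client_keys=None) as conn:
        result = await conn.run('echo hello')
        print(result.stdout, end='')

asyncio.run(main())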

kryptek

comment created time in 2 months

push eventronf/asyncssh

Ron Frederick

commit sha 1cf2a8479976322d649caeb56e0bee34d5eb4e05

Fix an issue with resuming reading when readuntil() returns incomplete read This commit fixes a problem where reading is not resumed when readuntil() fails to find a separator match and ends up raising IncompleteReadError with a partial result. With this change, readuntil() can be called again after the exception to continue reading data, until EOF or a signal is received. Note: Multi-character separators could be missed if they happen to be split across the partial results which are returned.

view details

push time in 2 months

issue commentronf/asyncssh

Record session for testing

Are you thinking of something like "tcpreplay", but for the unencrypted version of an SSH stream, where new keys are negotiated on the replayed session and the data inside is then re-encrypted with the new keys? It seems more like an application one would write on top of AsyncSSH rather than a core feature of AsyncSSH itself, though. I can imagine this getting pretty complex if you wanted to be able to "edit" the data in the SSH stream to account for application-specific differences, in order for the replayed data to be properly accepted by whatever target you point it at. I also expect you might need some way to know when to read data from the server to prevent flow control from eventually breaking things, and you might even need to extract data from the returned data stream to influence the edits you make in the replayed stream.

Syniex

comment created time in 2 months

issue closedronf/asyncssh

Connection Lost at running command in Alcatel SROS

Hello,

I have the following code:

import asyncio
import asyncssh
import logging


async def main():
    async with asyncssh.connect(
        "198.18.105.55", username="admin", password="admin", known_hosts=None
    ) as conn:
        await conn.run("environment no more", check=True)


logging.basicConfig(level=logging.DEBUG)
asyncio.run(main(), debug=True)

The result:

DEBUG:asyncio:Using selector: KqueueSelector
INFO:asyncssh:Opening SSH connection to 198.18.105.55, port 22
DEBUG:asyncio:connect <socket.socket fd=7, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('0.0.0.0', 0)> to ('198.18.105.55', 22)
DEBUG:asyncio:poll took 11.801 ms: 1 events
INFO:asyncssh:[conn=0] Connection to 198.18.105.55, port 22 succeeded
INFO:asyncssh:[conn=0]   Local address: 198.18.252.1, port 60002
DEBUG:asyncio:<socket.socket fd=7, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('198.18.252.1', 60002), raddr=('198.18.105.55', 22)> connected to 198.18.105.55:22: (<_SelectorSocketTransport fd=7 read=polling write=<idle, bufsize=0>>, <asyncssh.connection.SSHClientConnection object at 0x10b5ef510>)
DEBUG:asyncio:poll took 9.363 ms: 1 events
DEBUG:asyncssh:[conn=0] Requesting key exchange
DEBUG:asyncio:poll took 9.760 ms: 1 events
DEBUG:asyncssh:[conn=0] Received key exchange request
DEBUG:asyncssh:[conn=0] Beginning key exchange
DEBUG:asyncio:poll took 11.407 ms: 1 events
DEBUG:asyncio:poll took 17.857 ms: 1 events
DEBUG:asyncssh:[conn=0] Completed key exchange
DEBUG:asyncio:poll took 10.755 ms: 1 events
INFO:asyncssh:[conn=0] Beginning auth for user admin
DEBUG:asyncio:poll took 11.088 ms: 1 events
DEBUG:asyncssh:[conn=0] Received authentication banner
DEBUG:asyncio:poll took 0.011 ms: 1 events
DEBUG:asyncssh:[conn=0] Trying public key auth with ssh-rsa key
DEBUG:asyncio:poll took 11.027 ms: 1 events
DEBUG:asyncssh:[conn=0] Trying password auth
DEBUG:asyncio:poll took 134.869 ms: 1 events
INFO:asyncssh:[conn=0] Auth for user admin succeeded
DEBUG:asyncssh:[conn=0, chan=0] Set write buffer limits: low-water=16384, high-water=65536
INFO:asyncssh:[conn=0, chan=0] Requesting new SSH session
DEBUG:asyncio:poll took 9.418 ms: 1 events
INFO:asyncssh:[conn=0, chan=0]   Command: environment no more
DEBUG:asyncio:poll took 9.554 ms: 1 events
DEBUG:asyncio:<_SelectorSocketTransport fd=7 read=polling write=<idle, bufsize=0>> received EOF
INFO:asyncssh:[conn=0] Connection lost
INFO:asyncssh:[conn=0, chan=0] Closing channel due to connection close
INFO:asyncssh:[conn=0, chan=0] Channel closed: Connection lost
INFO:asyncssh:[conn=0] Closing connection
INFO:asyncssh:[conn=0] Sending disconnect: Disconnected by application (11)
DEBUG:asyncio:Close <_UnixSelectorEventLoop running=False closed=False debug=True>
Traceback (most recent call last):
  File "/Users/dracoboros/Git/project/test.py", line 14, in <module>
    asyncio.run(main(), debug=True)
  File "/Users/dracoboros/.pyenv/versions/3.7.4/lib/python3.7/asyncio/runners.py", line 43, in run
    return loop.run_until_complete(main)
  File "/Users/dracoboros/.pyenv/versions/3.7.4/lib/python3.7/asyncio/base_events.py", line 579, in run_until_complete
    return future.result()
  File "/Users/dracoboros/Git/project/test.py", line 10, in main
    await conn.run("environment no more", check=True)
  File "/Users/dracoboros/.pyenv/versions/project/lib/python3.7/site-packages/asyncssh/connection.py", line 3103, in run
    process = await self.create_process(*args, **kwargs)
  File "/Users/dracoboros/.pyenv/versions/project/lib/python3.7/site-packages/asyncssh/connection.py", line 3009, in create_process
    *args, **kwargs)
  File "/Users/dracoboros/.pyenv/versions/project/lib/python3.7/site-packages/asyncssh/connection.py", line 2927, in create_session
    bool(self._agent_forward_path))
  File "/Users/dracoboros/.pyenv/versions/project/lib/python3.7/site-packages/asyncssh/channel.py", line 1093, in create
    result = await self._make_request(b'exec', String(command))
  File "/Users/dracoboros/.pyenv/versions/project/lib/python3.7/site-packages/asyncssh/channel.py", line 660, in _make_request
    return await waiter
asyncssh.misc.ConnectionLost: Connection lost

Every time the run function is executed, the connection is lost.

closed time in 2 months

dracoboros

issue commentronf/asyncssh

Connection Lost at running command in Alcatel SROS

Sounds good - glad you figured it out. Feel free to open a new issue if you run into other problems!

dracoboros

comment created time in 2 months

issue commentronf/asyncssh

Connection Lost at running command in Alcatel SROS

If you read the banner as you show here, does it then run the command you sent, or does it still fail whenever you provide a command? I've seen some devices not support sending a command at all, allowing only "shell" style access. You may be running into that here. If that's the case, you should be able to send the command by writing it to stdin (with a newline at the end), possibly after reading the initial banner you mentioned.
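
Something along these lines (an untested sketch -- how much banner output to read before writing is device-specific):

import asyncio
import asyncssh

async def main():
    async with asyncssh.connect('198.18.105.55', username='admin',
                                password='admin', known_hosts=None) as conn:
        # With no command argument, this requests an interactive shell
        process = await conn.create_process()

        # Consume whatever banner/prompt the device prints first
        banner = await process.stdout.read(1024)
        print(banner, end='')

        # Send the command as if it had been typed at the prompt
        process.stdin.write('environment no more\n')

asyncio.run(main())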

dracoboros

comment created time in 2 months

issue closedronf/asyncssh

Asyncssh.scp folder names with spaces where asyncssh is acting as both client and server.

Ran into another snag. Trying to run asyncssh.scp with a path where a folder has spaces in it is failing with: SCP Operation failed: Connection lost

I am using the commit https://github.com/ronf/asyncssh/commit/c992ec4dd0e05997692763fbce2b690452a93608 that fixed the previous issue #253

This is also using asyncssh as both client and server.

I can't say I've found a solution to this one yet, but I can tell you where in the code it diverges (marked below): asyncssh/scp.py, in _SCPSource.run()

code:


    async def run(self, srcpath):
        """Start SCP transfer"""

        try:
            if isinstance(srcpath, str):
                srcpath = srcpath.encode('utf-8')

            exc = await self.await_response()  # <-- diverges here

            if exc:
                raise exc

            for path in await match_glob(self._fs, srcpath):
                await self._send_files(path, b'')
        except (OSError, SFTPError) as exc:
            self.handle_error(exc)
        finally:
            await self.close()

If there are no folders with spaces in their names, I can easily see that it makes it to the _send_files method. Otherwise, the Connection Lost exception is raised immediately.

I'll keep toying with this one to see if I can figure out more.

closed time in 2 months

knkp

issue commentronf/asyncssh

Asyncssh.scp folder names with spaces where asyncssh is acting as both client and server.

Yeah - I think in this case it's probably best to leave AsyncSSH as-is, requiring the escaping of spaces on remote file paths just as the standard 'scp' does. I'll go ahead and close this. Thanks for confirming that you were able to make it work!
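
To illustrate, the escaping would look something like this (an untested sketch -- the connection and paths are placeholders):

# Backslash-escape spaces in the remote path, just as with command-line scp
await asyncssh.scp((conn, 'some\\ folder/example\\ file.txt'), 'localdir')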

knkp

comment created time in 2 months

issue commentronf/asyncssh

Connection Lost at running command in Alcatel SROS

This generally means the remote system decided to close the SSH connection on you. Do you have access to the remote system's logs? It looks like you succeeded in completing the SSH handshake and in logging in, but the connection was closed at the TCP level right after sending the command that you are attempting to run.

dracoboros

comment created time in 2 months

more