If you are wondering where the data of this site comes from: GitMemory does not store any data, but only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
Kyle Parisi (kyleparisi), Newtown, PA, working on:

kyleparisi/assistant_cljs 2

Simple, extensible and powerful one stop personal assistant

kyleparisi/alpaca-sp500 1

Inspired by algo on using with custom nodes

kyleparisi/accounting 0

Simple Accounting App for everyone

kyleparisi/AltoRouter 0

PHP5.3+ Routing Class. Lightweight yet extremely flexible. Supports REST, dynamic and reversed routing.

kyleparisi/andesite 0

💾 Easily manage access to your open directory through OAuth2

kyleparisi/ApnsPHP 0

ApnsPHP: Apple Push Notification & Feedback Provider

kyleparisi/aports 0

Mirror of aports repository

kyleparisi/Articles-and-Tutorials 0

Because documentation often gets outdated quickly, this repo exists to allow the DigitalOcean Community to help keep it up to date!

issue comment: celery/celery

celery raise error: [Errno 104] Connection reset by peer after started

Since this issue is still actively referenced, I'd like to add some explicit details here. I'm using redis as my backend. As stated above, celery is not the issue. I wouldn't advise setting output buffers to 0 either. You can find a very good description of this limitation in the redis docs.
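For reference, redis enforces per-class output-buffer limits, and pub/sub clients get the strictest default treatment. A stock redis.conf (exact values may differ by version or deployment) contains something like:

```
# redis.conf defaults: pubsub clients are disconnected if their output
# buffer exceeds 32mb (hard limit), or stays above 8mb for 60 seconds
# (soft limit). Normal clients are unlimited by default.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit pubsub 32mb 8mb 60
```

A slow subscriber that lets its buffer grow past these thresholds is disconnected by the server, which surfaces on the client side as "Connection reset by peer".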

To investigate whether output buffer limits are your issue, run CLIENT LIST on the redis server. Discard all the clients where omem=0; the remaining clients have output buffers > 0. See if any of them have sub > 0 or psub > 0 and exceed your server's soft/hard limits.
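That filtering step can be scripted. A minimal sketch, assuming you have captured the text output of CLIENT LIST (the sample lines and client ids below are illustrative, not from a real server):

```python
# Sketch: filter redis CLIENT LIST output for pub/sub clients that have
# a non-empty output buffer. Sample data is illustrative only.
sample = """\
id=3 addr=10.0.0.5:52100 name= omem=0 sub=0 psub=0 cmd=get
id=7 addr=10.0.0.9:52144 name= omem=20512 sub=1 psub=0 cmd=subscribe
id=9 addr=10.0.0.7:52199 name= omem=0 sub=2 psub=0 cmd=subscribe
"""

def parse_client_list(text):
    """Each CLIENT LIST line is space-separated key=value pairs."""
    clients = []
    for line in text.strip().splitlines():
        fields = dict(pair.split("=", 1) for pair in line.split() if "=" in pair)
        clients.append(fields)
    return clients

def suspect_clients(clients):
    # Keep subscribers (sub/psub > 0) whose output buffer is non-empty.
    return [
        c for c in clients
        if int(c["omem"]) > 0 and (int(c["sub"]) > 0 or int(c["psub"]) > 0)
    ]

print([c["id"] for c in suspect_clients(parse_client_list(sample))])  # → ['7']
```

Any client this turns up is a subscriber whose buffered replies are piling up; compare its omem against the pubsub soft/hard limits in your redis config.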

We have a single service that triggers tasks with .delay(). .delay() returns an AsyncResult, which means celery has to subscribe to redis for events related to the task. As referenced above, pub/sub clients have limits on their output buffers. If you only have a few workers, it can take some time for the subscriptions to call back. In our particular case we just need "fire and forget", so we make use of the ignore_result feature (docs). If you do need the results, you'll need to balance your connections against the number of workers doing the processing.


comment created time in a month
