I think I'm currently running into the same (or at least related) issue to #1282.
We use the same `AmazonS3` instance for a long time (essentially the full runtime of the service, which measures in days to weeks) by providing it as a bean in our Spring Boot application. Since the client is documented as thread safe and no documentation states otherwise, we assumed that is safe.
The application processes images and PDFs and can sometimes run into an OOM situation. This is generally dealt with, and the application does recover from it. However, it seems that this triggers a non-recoverable condition in the S3 client. #1282 pointed me to apache/httpcomponents-client@ca98ad6, and I think this is the root cause: our application itself might recover, but the connection pool is shut down. Now, I can't really argue with the behaviour of httpclient, since their argument for doing what they do is solid, but I still need a way out of this.
What would be the "proper" way to handle this from the standpoint of aws-sdk-java? I have some ideas, but none seem really good:
- Don't reuse the `AmazonS3` instance but re-create it for each request. I have seen code like this floating around, but I'm not sure about the cost of re-creating a client for each request. And I'm not even sure this would fix it, since it looks like the underlying connection pool is still shared.
- Call `AmazonS3.shutdown` when I run into the `IllegalStateException`. After looking at the code, that seems to propagate down to the connection pool and close it. But then what? Will a new pool be created automatically? And I would need to do that at every call site, which would make the code rather ugly.
- Limit the lifetime of the `AmazonS3` instance in my application and periodically create a new one, manually calling `shutdown` on the old one. But that would then also affect ongoing connections. And it wouldn't really solve the problem for the window after the old pool is shut down and before a new one is created, but at least the application would recover eventually.
- Just give up, not handle the OOM condition, and let Kubernetes restart my pod.
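To make the second and third options less ugly at the call sites, one idea is to hide the rebuild behind a small wrapper: run each operation through it, and when an `IllegalStateException` signals a dead connection pool, build a fresh client and retry once. This is only a sketch of that pattern, not an aws-sdk-java API; the `RebuildingClient` name, the single-retry policy, and treating every `IllegalStateException` as "pool shut down" are my assumptions (in a real setup the factory would be something like `AmazonS3ClientBuilder::defaultClient`, and the old client's `shutdown` would also need calling).

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Function;
import java.util.function.Supplier;

// Hypothetical wrapper (not part of aws-sdk-java): lazily rebuilds a client
// whose connection pool has been shut down, so call sites don't each have to
// catch the IllegalStateException and re-create the client themselves.
class RebuildingClient<C> {
    private final Supplier<C> factory;        // e.g. AmazonS3ClientBuilder::defaultClient (assumption)
    private final AtomicReference<C> current;

    RebuildingClient(Supplier<C> factory) {
        this.factory = factory;
        this.current = new AtomicReference<>(factory.get());
    }

    // Run an operation against the current client; on IllegalStateException
    // (assumed here to mean "connection pool shut down"), swap in a fresh
    // client and retry exactly once.
    <R> R call(Function<C, R> op) {
        C client = current.get();
        try {
            return op.apply(client);
        } catch (IllegalStateException e) {
            // Only replace the client if no other thread already did.
            current.compareAndSet(client, factory.get());
            return op.apply(current.get());
        }
    }
}
```

Usage would then look like `wrapper.call(s3 -> s3.getObject(bucket, key))`, keeping the rebuild logic in one place. It still shares the downside of the third option, though: requests in flight on the old client when the pool dies will fail once before the wrapper recovers.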