A Swift NIO MQTT 3.1.1 Client
adam-fowler/soto-cognito-authentication 12
Authenticating with AWS Cognito for Vapor
Generate a signed URL or Request headers for submitting to Amazon Web Services.
adam-fowler/soto-cognito-authentication-kit 6
Authenticating with AWS Cognito
Compression/Decompression support for Swift NIO ByteBuffer
adam-fowler/s3-filesystem-kit 4
Swift File Manager for AWS S3
adam-fowler/sns-to-slack-lambda 3
Swift Lambda for publishing SNS messages to Slack
AWS SDK for the Swift programming language that works on Linux and Mac
adam-fowler/ses-forwarder-lambda 2
Swift based SES email forwarding lambda
adam-fowler/dictionary-encoder 1
Swift Dictionary Encoder, based on the JSONEncoder in Foundation
push event swift-server/sswg
commit sha 69b8b6d78016ba7cd5192212ded1945fe6931799
update members (#53)
push time in an hour
started kneekey23/InAppPurchaseLambda
started time in 7 hours
push event soto-project/soto
commit sha 70c898c2c61fe1ae113ea80cc908278b21936ff6
Update LICENSE
commit sha b8bb71ed45ec66f43d6f527a7df4091569ea3cf5
Update README.md
commit sha 9fe59996e276e5a23a76d5b7ca58e0cea4de5be0
Update models from aws-sdk-go v1.36.28
push time in 4 days
started adam-fowler/mqtt-nio
started time in 6 days
started alchemy-swift/alchemy
started time in 6 days
issue comment soto-project/soto
Multiple MultipartUploadRequest causing memory spike
Fixed... I updated the UI each time I had upload progress, and it looks like SwiftUI deallocated memory after a few seconds, not instantly as I expected when the view body is reloaded... Sorry for raising the issue. You can delete it :)
comment created time in 6 days
issue closed soto-project/soto
Multiple MultipartUploadRequest causing memory spike
Is your feature request related to a problem? Please describe.
I want to upload at least 5 images at the same time. I'm creating only one S3 instance with the AWS configuration. The issue is that when I start the requests using s3.multipartUpload, the memory spikes to 800 MB-1 GB. I also need to get the progress for each request, to display it to the user.
Describe the solution you'd like How can I do this more efficiently, with minimal memory usage? In the future I want to support a dynamic number of concurrent uploads, depending on the user's upload speed.
Describe alternatives you've considered For now, just 1 file at a time... which is not nice.
closed time in 6 days
nastasiupta issue comment soto-project/soto
Multiple MultipartUploadRequest causing memory spike
So I've looked at memory while running the S3Tests.testMultiPartUpload() test, and it never seems to use much more than 2 MB while uploading an 11 MB file. How are you measuring memory usage?
Does your upload code look like this?
let request = S3.CreateMultipartUploadRequest(
    bucket: name,
    key: name
)
return s3.multipartUpload(request, filename: filename)
Yes it does... I changed partSize to 8 MB instead of the default value of 5 MB... Do you observe the progress? What about the always event, to know when it is ready or has hit an issue? { (progress) in ... } and _ = multipartUpload.always { (uploadResult) in ... }
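Pieced together from the fragments in this exchange, the call under discussion presumably looks something like the sketch below. The partSize and progress parameter names are taken from the comments themselves, so treat this as an assumption about the Soto S3 multipart API rather than a verified signature:

```swift
// Sketch assembled from the comment fragments above; partSize and
// progress are parameter names assumed from this discussion.
let request = S3.CreateMultipartUploadRequest(bucket: name, key: name)
let multipartUpload = s3.multipartUpload(
    request,
    partSize: 8 * 1024 * 1024,   // the 8 MB part size mentioned above
    filename: filename,
    progress: { progress in
        // called as parts complete; this is where the UI would be updated
        print("upload progress: \(progress)")
    }
)
_ = multipartUpload.always { uploadResult in
    // fires on success or failure, i.e. "when it is ready or has hit an issue"
    print(uploadResult)
}
```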
comment created time in 6 days
issue comment soto-project/soto
Multiple MultipartUploadRequest causing memory spike
Wow, something is going wrong there.
There is no point using multipart for files smaller than 5 MB, though, as that is the smallest size an individual part can be.
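As a concrete illustration of that limit (my own sketch, not code from this thread): every part except the last must be at least 5 MB, so a file no bigger than the effective part size only ever produces one part, and multipart buys nothing over a plain putObject:

```swift
// Illustrative helper (not part of Soto): decide whether multipart upload
// can actually split the file, given S3's 5 MB minimum part size.
let minimumPartSize = 5 * 1024 * 1024

func multipartIsWorthwhile(fileSize: Int, partSize: Int) -> Bool {
    // parts smaller than the S3 minimum are rejected, so the effective
    // part size is at least 5 MB; a file that fits in one such part
    // cannot be split
    let effectivePartSize = max(partSize, minimumPartSize)
    return fileSize > effectivePartSize
}

print(multipartIsWorthwhile(fileSize: 11 * 1024 * 1024, partSize: 5 * 1024 * 1024)) // true
print(multipartIsWorthwhile(fileSize: 3 * 1024 * 1024, partSize: 5 * 1024 * 1024))  // false
```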
Well, the users will upload bigger files, between 10-15 MB... I use those small files for testing...
comment created time in 6 days
issue comment soto-project/soto
Multiple MultipartUploadRequest causing memory spike
So the project is a macOS app, and the files are selected by the user (imported into the app using NSOpenPanel).
comment created time in 6 days
issue comment soto-project/soto
Multiple MultipartUploadRequest causing memory spike
Hey @adam-fowler, well, the images are small, about 1 MB up to 3-4 MB... Those files are not very big... In the future, we plan to support uploading images of up to 15 MB. It's pretty strange what happens with the memory during the upload, but after the upload finishes it goes back to a normal level... not to the initial one from before, even though I deallocate the uploading workers.
comment created time in 6 days
issue opened soto-project/soto
Multiple MultipartUploadRequest using the same S3 instance
created time in 7 days
issue closed soto-project/soto
Soto with Vapor and MinIO - Error with putObject() and file handle payload
I'm using Soto (5.0.0-rc.1.0) with a Vapor (4.35.0) project against MinIO (https://min.io). This is all local with MinIO installed from Homebrew and running on macOS with Xcode. I have set up a Vapor service per https://soto.codes/user-guides/using-soto-with-vapor.html and have success with Soto using putObject() and a String payload. I've also had success with getObject() and getObjectStreaming() pulling data back and writing out with NIOFileHandles. However, when using putObject() and an AWSPayload from a NIOFileHandle, I receive the following logging and error:
Request:
PutObject
PUT http://192.168.1.170:9000/projectimage/64C40105-BC8E-470F-915E-51F28805072D.png
Headers: [
x-amz-acl : public-read
Content-Length : 91639
user-agent : Soto/5.0
content-type : application/octet-stream
]
Body: raw (91639 bytes)
Response:
Status : 403
Headers: [
date : Thu, 14 Jan 2021 01:11:03 GMT
content-security-policy : block-all-mixed-content
vary : Origin
x-xss-protection : 1; mode=block
content-length : 475
server : MinIO
x-amz-request-id : 1659F4096A20C330
content-type : application/xml
accept-ranges : bytes
]
Body:
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
<Key>64C40105-BC8E-470F-915E-51F28805072D.png</Key>
<BucketName>projectimage</BucketName>
<Resource>/projectimage/64C40105-BC8E-470F-915E-51F28805072D.png</Resource>
<RequestId>1659F4096A20C330</RequestId>
<HostId>5c9ff981-2895-4a6e-9847-53244f40bca7</HostId>
</Error>
Here is the Vapor route handler code:
func profilePicture(_ req: Request) throws -> EventLoopFuture<Response> {
    struct Input: Content {
        var file: File
    }
    // get user
    let user = try req.auth.require(User.self)
    // get input
    let input = try req.content.decode(Input.self)
    // get temp path for dir and file
    let pathDir = req.localFileService.profilePicTempUploadPhysicalPath(userId: UUID.init())
    let pathFile = pathDir + input.file.filename
    // create temp path
    try FileManager.default.createDirectory(atPath: pathDir, withIntermediateDirectories: true, attributes: [:])
    // open file...
    return req.application.fileio.openFile(path: pathFile, mode: .write, flags: .allowFileCreation(posixMode: 0o744), eventLoop: req.eventLoop)
        .flatMap { handle in
            // ...and write out...
            return req.application.fileio.write(fileHandle: handle, buffer: input.file.data, eventLoop: req.eventLoop)
                .flatMap { _ in
                    try! handle.close()
                    // ...reopen for reading...
                    return req.application.fileio.openFile(path: pathFile, mode: .read, eventLoop: req.eventLoop)
                        .flatMap { handle2 -> EventLoopFuture<Response> in
                            let fileAttributes = try! FileManager.default.attributesOfItem(atPath: pathFile)
                            let fileSizeNSNumber = fileAttributes[.size] as! NSNumber
                            let fileSize = fileSizeNSNumber.intValue
                            let bodyData = AWSPayload.fileHandle(handle2, offset: 0, size: fileSize, fileIO: req.application.fileio)
                            // ...send to S3...
                            let putObjectRequest = S3.PutObjectRequest(acl: .publicRead, body: bodyData, bucket: "projectimage", contentLength: Int64(bodyData.size ?? 0), key: "\(user.uuid).png")
                            return req.remoteFileService.s3.putObject(putObjectRequest)
                                .map { putObjectOutput in
                                    print(putObjectOutput)
                                    try! handle2.close()
                                    return req.redirect(to: "/profile")
                                }
                        }
                }
        }
}
Any suggestions on hunting down the SignatureDoesNotMatch error?
closed time in 7 days
joshjacob issue comment soto-project/soto
Soto with Vapor and MinIO - Error with putObject() and file handle payload
@adam-fowler Removing the content-length solved the issue. Thank you for the quick replies and the tip about the alternate openFile method!
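The "alternate openFile" mentioned here is presumably NIO's NonBlockingFileIO.openFile(path:eventLoop:), which returns the handle together with a FileRegion describing the file's size, so there is no need to query FileManager separately. A hedged sketch, reusing names from the route handler in the issue above (pathFile, req, user, and the bucket/key come from that code):

```swift
// Sketch only: open the file and learn its size in one call, then let
// Soto derive the content length instead of passing one explicitly
// (dropping the explicit contentLength is what fixed the signature error).
req.application.fileio.openFile(path: pathFile, eventLoop: req.eventLoop)
    .flatMap { fileInfo -> EventLoopFuture<S3.PutObjectOutput> in
        let (handle, region) = fileInfo
        // region.readableBytes is the file size reported by the open call
        let body = AWSPayload.fileHandle(handle, offset: 0, size: region.readableBytes, fileIO: req.application.fileio)
        let putRequest = S3.PutObjectRequest(acl: .publicRead, body: body, bucket: "projectimage", key: "\(user.uuid).png")
        return req.remoteFileService.s3.putObject(putRequest)
            .always { _ in try? handle.close() }
    }
```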
comment created time in 7 days
issue comment soto-project/soto
Soto with Vapor and MinIO - Error with putObject() and file handle payload
@adam-fowler Thanks for the reply and sorry for the oversight on the version update. I just updated to v5.1 and am seeing the same error.
comment created time in 7 days
issue opened soto-project/soto
Soto with Vapor and MinIO - Error with putObject() and file handle payload
created time in 8 days
fork ktoso/swift-source-compat-suite
The infrastructure and project index comprising the Swift source compatibility suite.
fork in 10 days
started apple/swift-source-compat-suite
started time in 10 days
push event soto-project/soto
commit sha 6bb9b54fcc9e4399cd6929a747c9f523a95f02c2
Update models from aws-sdk-go v1.36.19 (#448)
* Update models from aws-sdk-go v1.36.19
* Update Package.swift
Co-authored-by: Adam Fowler <adamfowler71@gmail.com>
commit sha 03b37552fee5bed4240dde2aec7689832e86a0c0
Update CONTRIBUTORS.txt
commit sha ca44c42eef6c0ae4965274c53b81e3a994ce6b29
Update models from aws-sdk-go v1.36.23
push time in 11 days
PR opened soto-project/soto
Automated update of AWS service files from json model files in aws-sdk-go repository
pr created time in 11 days
issue comment soto-project/soto
S3 Multipart Upload in the background for iOS
Hi @adam-fowler, thanks for the quick response.
As you mentioned above, I could eventually implement my own AWSHTTPClient using NSURLSession and set it on the AWSClient object. However, as I understand it, that won't make use of the current multipart implementation in the S3 service, as I would have to re-implement the whole multipart upload request on top of that protocol.
Please correct me if I'm wrong, as I don't see that as the best or most practical solution. Thanks.
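For orientation, plugging a session-based client into Soto would look roughly like the sketch below. The AWSHTTPClient requirements shown (execute, eventLoopGroup) are assumptions inferred from this discussion, not the verified soto-core definition, so check the actual protocol before relying on this shape:

```swift
import Foundation
import Logging
import NIO

// Hypothetical sketch: the real AWSHTTPClient protocol lives in soto-core
// and may declare different requirements than assumed here.
final class BackgroundURLSessionHTTPClient /* : AWSHTTPClient */ {
    let session: URLSession
    let eventLoopGroup: EventLoopGroup

    init(eventLoopGroup: EventLoopGroup) {
        // A background configuration is what would let transfers continue
        // after the iOS app is suspended.
        let config = URLSessionConfiguration.background(withIdentifier: "soto.uploads")
        self.session = URLSession(configuration: config)
        self.eventLoopGroup = eventLoopGroup
    }

    // Assumed shape of the protocol's execute requirement.
    func execute(request: AWSHTTPRequest, timeout: TimeAmount,
                 on eventLoop: EventLoop, logger: Logger) -> EventLoopFuture<AWSHTTPResponse> {
        let promise = eventLoop.makePromise(of: AWSHTTPResponse.self)
        // omitted: translate AWSHTTPRequest into a URLRequest, run an
        // upload task on the session, and succeed or fail the promise
        return promise.futureResult
    }
}
```

Whether the existing multipart helpers would compose with a background session like this is exactly the open question raised in the comment above.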
comment created time in 13 days
issue opened soto-project/soto
S3 Multipart Upload in the background for iOS
Is your feature request related to a problem?
We are using SotoS3 to upload big files via multipartUpload. However, the app needs to stay open in the foreground during the upload process.
Describe the solution you'd like Keep the upload process running when the app goes to the background on iOS.
Describe alternatives you've considered
The only alternative I've found is to implement the AWS Mobile SDK for iOS to be able to use S3TransferUtility (which supports background transfers). I believe background upload is not even available in the latest Amplify iOS SDK (at least there isn't anything documented).
Thank you very much in advance.
created time in 14 days
fork fabianfett/swift-corelibs-foundation
The Foundation Project, providing core utilities, internationalization, and OS independence
fork in 14 days
started adam-fowler/aws-signer-v4
started time in 15 days