If you are wondering where the data on this site comes from, please visit https://api.github.com/users/jdmcd/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
Jimmy McDermott (jdmcd) · @gotranseo · New York · jdmcd.io

CTO @gotranseo, on a mission to make educational planning processes more inclusive and equitable. @vapor maintainer.

jdmcd/BCPlan-iOS 2

iOS Application for the ISYS1021 Final Project

jdmcd/FrostedSidebar 1

Hamburger Menu using Swift and iOS 8 APIs

bdweix/BCWebApps 0

Repository for BC Web Apps course

gotranseo/bugsnag 0

Report errors with Bugsnag 🐛

jdmcd/APNGKit 0

A high-performance and delightful way to play with the APNG format in iOS.

jdmcd/awesome-vapor 0

A curated list of Vapor-related awesome projects.

jdmcd/BCPlan 0

Vapor Backend for the ISYS1021 Final Project

jdmcd/Burritos 0

A collection of Swift Property Wrappers (formerly "Property Delegates")

PullRequestReviewEvent (×5)

push event vapor/queues

Jimmy McDermott

commit sha fc9b05b70172df362860988337bd78cb6a3ca978

First shot at conversion

view details

push time in 17 days

create branch vapor/queues

branch : async

created branch time in 17 days

create branch gotranseo/bugsnag

branch : async

created branch time in 17 days

push event gotranseo/swift-pwned

Jimmy McDermott

commit sha 924a9fea968e9d690ed3e0979ef4666b22d929e6

Add async/await impl

view details

push time in 17 days

PullRequestReviewEvent

create branch gotranseo/swift-pwned

branch : async

created branch time in 17 days

PR opened vapor-community/wkhtmltopdf

[WIP] Convert to async/await

Converts the main generatePDF to async/await

+50 -44

0 comments

3 changed files

pr created time in 17 days

push event vapor-community/wkhtmltopdf

Jimmy McDermott

commit sha 8c5f02cade77f2be57074c5815220f73f2211c5b

Convert to async/await

view details

push time in 17 days

create branch vapor-community/wkhtmltopdf

branch : async

created branch time in 17 days

PullRequestReviewEvent

push event validationapp/swift-sdk

Jimmy McDermott

commit sha fbfdd9f3b2d50aa0b710c0c8d68cdc5eccc12e1d

Update to async await

view details

push time in 17 days

create branch validationapp/swift-sdk

branch : async

created branch time in 17 days

push event gotranseo/oneroster

Jimmy McDermott

commit sha d69da78175d08265278f8bd9357339e9ff0c9789

Async version

view details

push time in 17 days

create branch gotranseo/oneroster

branch : async

created branch time in 17 days

issue opened vapor/mysql-nio

Remove hard crash from `MySQLData` with malformed data

Describe the bug

This section: https://github.com/vapor/mysql-nio/blob/main/Sources/MySQLNIO/MySQLData.swift#L448-L469 causes a hard crash (i.e. a force unwrap via the bang operator) when the data is malformed at the MySQL level. For example, a non-nullable datetime defaults to 0000-00-00 00:00:00.000000, which is 1. obviously invalid as a date and 2. causes the description property to crash.

To Reproduce

Run a SELECT against a table that has a date column whose value is set to 0000-00-00 00:00:00.000000

Expected behavior

It should not crash, and it should ideally log some kind of error
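
For illustration, a minimal sketch of that safer behavior, assuming the raw value arrives as a textual datetime; decodeDate, rawDateString, and the format string are illustrative and not MySQLNIO API:

import Foundation

// Hypothetical helper: parse a MySQL textual datetime without force-unwrapping.
// MySQL's "zero date" (0000-00-00 00:00:00.000000) fails to parse, so it is
// reported and returned as nil instead of crashing the process.
func decodeDate(from rawDateString: String) -> Date? {
    let formatter = DateFormatter()
    formatter.dateFormat = "yyyy-MM-dd HH:mm:ss.SSSSSS"
    formatter.locale = Locale(identifier: "en_US_POSIX")
    formatter.timeZone = TimeZone(identifier: "UTC")
    guard let date = formatter.date(from: rawDateString) else {
        print("warning: could not decode MySQL datetime '\(rawDateString)'")
        return nil
    }
    return date
}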

Environment

MySQLNIO v1.3.2

created time in 18 days

PullRequestReviewEvent

release gotranseo/oneroster

1.0.7

released time in a month

created tag gotranseo/oneroster

tag: 1.0.7

A Swift library for interacting with the OneRoster API

created time in a month

push event gotranseo/oneroster

Jimmy McDermott

commit sha 003d642a592a1aee6efa77f3607e30d64773b618

Add classes endpoint and update decoder

view details

push time in a month

issue closed vapor/queues

Retry delay

It would be nice to be able to specify a delay between retries.

Maybe something like

import Foundation

protocol Job {
    func nextRetry(attempt: Int) -> Date
}

extension Job {
    // Default: retry immediately
    func nextRetry(attempt: Int) -> Date { return Date() }
}

That would allow you to implement exponential backoff, etc.
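
For illustration, a minimal sketch of a job adopting the proposed hook to get exponential backoff; EmailJob is a hypothetical conforming type, not part of Queues:

import Foundation

// Hypothetical job adopting the proposed nextRetry(attempt:) hook.
struct EmailJob: Job {
    // Exponential backoff: retry after 2s, 4s, 8s, ... as attempts increase.
    func nextRetry(attempt: Int) -> Date {
        Date(timeIntervalSinceNow: pow(2.0, Double(attempt)))
    }
}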

closed time in a month

jnordberg

push event vapor/queues

Kacper Kawecki

commit sha a0b96a560647fccba256d7e30c2a761bb54aa979

Allow delaying retries of failed job (#101)

* Add nextRetryIn to job for delaying retries
* Small improvements to docs and test
* Jobs with retry delay 0 are pushed back to the queue
* Clear jobs data before pushing data for retry
* Fix indentation

view details

push time in a month

PR merged vapor/queues

Allow delaying retries of failed job (semver-minor)

Added the possibility of delaying retries of failed jobs.

Example usage

struct SomeJob: Job {
    func dequeue(_ context: QueueContext, _ payload: Payload) -> EventLoopFuture<Void> {
        ....
    }

    // Exponential backoff: retry after 2s, 4s, 8s, ... as attempts increase
    func nextRetryIn(attempt: Int) -> Int {
        return Int(pow(2.0, Double(attempt)))
    }
}
+155 -14

3 comments

4 changed files

kacperk

pr closed time in a month

PullRequestReviewEvent

Pull request review comment vapor/queues

Allow delaying retries of failed job

 public struct QueueWorker {
                     }.flatten(on: self.queue.context.eventLoop).flatMapError { error in
                         self.queue.logger.error("Failed to send error notification: \(error)")
                         return self.queue.context.eventLoop.future()
+                    }.flatMap {
+                        logger.trace("Job done being run")
+                        return self.queue.clear(id)
                     }
                 }
             } else {
-                logger.error("Job failed, retrying... \(error)", metadata: [
-                    "job_id": .string(id.string),
-                    "job_name": .string(name),
-                    "queue": .string(self.queue.queueName.string)
-                ])
-                return self.run(
+                return self.retry(
                     id: id,
                     name: name,
                     job: job,
                     payload: payload,
                     logger: logger,
-                    remainingTries: remainingTries - 1,
-                    jobData: jobData
+                    remainingTries: remainingTries,
+                    attempts: attempts,
+                    jobData: jobData,
+                    error: error
                 )
             }
         }
     }
+
+    private func retry(
+            id: JobIdentifier,
+            name: String,
+            job: AnyJob,
+            payload: [UInt8],
+            logger: Logger,
+            remainingTries: Int,
+            attempts: Int?,
+            jobData: JobData,
+            error: Error
+    ) -> EventLoopFuture<Void> {
+        let attempts = attempts ?? 0
+        let delayInSeconds = job._nextRetryIn(attempt: attempts + 1)
+        if delayInSeconds == -1 {
+            logger.error("Job failed, retrying... \(error)", metadata: [
+                "job_id": .string(id.string),
+                "job_name": .string(name),
+                "queue": .string(self.queue.queueName.string)
+            ])
+            return self.run(
+                    id: id,
+                    name: name,
+                    job: job,
+                    payload: payload,
+                    logger: logger,
+                    remainingTries: remainingTries - 1 ,
+                    attempts: attempts + 1,
+                    jobData: jobData
+            )
+        } else {
+            logger.error("Job failed, retrying in \(delayInSeconds)s... \(error)", metadata: [
+                "job_id": .string(id.string),
+                "job_name": .string(name),
+                "queue": .string(self.queue.queueName.string)
+            ])
+            let storage = JobData(
+                    payload: jobData.payload,
+                    maxRetryCount: remainingTries - 1,
+                    jobName: jobData.jobName,
+                    delayUntil: Date(timeIntervalSinceNow: Double(delayInSeconds)),
+                    queuedAt: jobData.queuedAt,
+                    attempts: attempts + 1

Is the indentation funky here?

kacperk

comment created time in a month

Pull request review comment vapor/queues

Allow delaying retries of failed job

 public struct QueueWorker {
                     }.flatten(on: self.queue.context.eventLoop).flatMapError { error in
                         self.queue.logger.error("Failed to send error notification: \(error)")
                         return self.queue.context.eventLoop.future()
+                    }.flatMap {
+                        logger.trace("Job done being run")
+                        return self.queue.clear(id)
                     }
                 }
             } else {
-                logger.error("Job failed, retrying... \(error)", metadata: [
-                    "job_id": .string(id.string),
-                    "job_name": .string(name),
-                    "queue": .string(self.queue.queueName.string)
-                ])
-                return self.run(
+                return self.retry(
                     id: id,
                     name: name,
                     job: job,
                     payload: payload,
                     logger: logger,
-                    remainingTries: remainingTries - 1,
-                    jobData: jobData
+                    remainingTries: remainingTries,
+                    attempts: attempts,
+                    jobData: jobData,
+                    error: error
                 )
             }
         }
     }
+
+    private func retry(
+            id: JobIdentifier,
+            name: String,
+            job: AnyJob,
+            payload: [UInt8],
+            logger: Logger,
+            remainingTries: Int,
+            attempts: Int?,
+            jobData: JobData,
+            error: Error
+    ) -> EventLoopFuture<Void> {
+        let attempts = attempts ?? 0
+        let delayInSeconds = job._nextRetryIn(attempt: attempts + 1)
+        if delayInSeconds == -1 {
+            logger.error("Job failed, retrying... \(error)", metadata: [
+                "job_id": .string(id.string),
+                "job_name": .string(name),
+                "queue": .string(self.queue.queueName.string)
+            ])
+            return self.run(
+                    id: id,
+                    name: name,
+                    job: job,
+                    payload: payload,
+                    logger: logger,
+                    remainingTries: remainingTries - 1 ,
+                    attempts: attempts + 1,
+                    jobData: jobData
+            )
+        } else {
+            logger.error("Job failed, retrying in \(delayInSeconds)s... \(error)", metadata: [
+                "job_id": .string(id.string),
+                "job_name": .string(name),
+                "queue": .string(self.queue.queueName.string)
+            ])
+            let storage = JobData(
+                    payload: jobData.payload,
+                    maxRetryCount: remainingTries - 1,
+                    jobName: jobData.jobName,
+                    delayUntil: Date(timeIntervalSinceNow: Double(delayInSeconds)),
+                    queuedAt: jobData.queuedAt,
+                    attempts: attempts + 1
+            )
+            return self.queue.set(id, to: storage).flatMap {

Does this also remove the id from the processing queue?

kacperk

comment created time in a month