
Buzzardo/app-starters-release

Spring Cloud Stream App Starters and its Release Train

Buzzardo/asciidoctor-gradle-examples

A collection of example projects that demonstrates how to use the Asciidoctor Gradle plugin

Buzzardo/buzzardo.github.io

My own GitHub page, but I'm using it to show the rendered version of the Spring Style Guide (currently a preliminary draft).

Buzzardo/docbookrx

(An early version of) a DocBook to AsciiDoc converter written in Ruby.

Buzzardo/Documentation

The docs used for the Steeltoe Website

Buzzardo/getting-started-guides

Building a Functional Reactive REST Service :: Learn how to create a RESTful web service with Reactive Spring

Buzzardo/getting-started-macros

Collection of macros used to support getting started guides

issue comment asciidoctor/asciidoctor-epub3

Feature request: Support for ToC in the output

Thanks much. :)

Buzzardo

comment created time in 5 days

push event spring-guides/gs-rest-hateoas

Adib Saikali

commit sha 2b33e3b115700b5537322a79fb072fd017ccb39e

Update text to reference RepresentationModel instead of ResourceSupport

view details

push time in 6 days

PR opened spring-projects/spring-data-rest

Wording changes

Replace potentially insensitive language with more neutral language.

Note that problematic words in the code have to remain in the docs until the code changes.

+5 -5

0 comment

5 changed files

pr created time in 11 days

PR opened spring-projects/spring-data-solr

Wording change

Changed "executes" to the more friendly "runs".

+1 -1

0 comment

1 changed file

pr created time in 11 days

create branch Buzzardo/spring-data-solr

branch : wording-changes

created branch time in 11 days

create branch Buzzardo/spring-data-rest

branch : wording-fix

created branch time in 11 days

create branch Buzzardo/spring-data-redis

branch : wording-fix

created branch time in 11 days

PR opened spring-projects/spring-data-redis

Wording changes

Removed the language of oppression and violence and replaced it with more neutral language.

Note that problematic words in the code have to remain in the docs until the code changes.

+19 -19

0 comment

10 changed files

pr created time in 11 days

PR opened spring-projects/spring-data-r2dbc

Wording changes

Removed the language of oppression and violence and replaced it with more neutral language.

Note that problematic words in the code have to remain in the docs until the code changes.

+11 -12

0 comment

4 changed files

pr created time in 11 days

create branch Buzzardo/spring-data-r2dbc

branch : wording-fix

created branch time in 11 days

create branch Buzzardo/spring-data-mongodb

branch : wording-fix

created branch time in 11 days

PR opened spring-projects/spring-data-mongodb

Wording changes

Removed the language of oppression and violence and replaced it with more neutral language.

Note that problematic words in the code have to remain in the docs until the code changes.

+51 -50

0 comment

11 changed files

pr created time in 11 days

PR opened spring-projects/spring-data-ldap

Wording changes

Removed the language of oppression and violence and replaced it with more neutral language.

Note that problematic words in the code have to remain in the docs until the code changes.

+2 -2

0 comment

1 changed file

pr created time in 11 days

create branch Buzzardo/spring-data-ldap

branch : wording-fix

created branch time in 11 days

PR opened spring-projects/spring-data-keyvalue

Wording changes

Removed the language of oppression and violence and replaced it with more neutral language.

Note that problematic words in the code have to remain in the docs until the code changes.

+2 -2

0 comment

1 changed file

pr created time in 11 days

create branch Buzzardo/spring-data-keyvalue

branch : wording-fix

created branch time in 11 days

PR opened spring-projects/spring-data-jpa

Wording changes

Removed the language of oppression and violence and replaced it with more neutral language.

Note that problematic words in the code have to remain in the docs until the code changes.

+18 -20

0 comment

5 changed files

pr created time in 11 days

create branch Buzzardo/spring-data-jpa

branch : wording-fix

created branch time in 11 days

create branch Buzzardo/spring-data-jdbc

branch : wording-fix

created branch time in 11 days

PR opened spring-projects/spring-data-jdbc

Wording changes

Removed the language of oppression and violence and replaced it with more neutral language.

Note that problematic words in the code have to remain in the docs until the code changes.

+6 -6

0 comment

1 changed file

pr created time in 11 days

create branch Buzzardo/spring-data-gemfire

branch : wording-fix

created branch time in 11 days

PR opened spring-projects/spring-data-gemfire

Wording changes

Removed the language of oppression and violence and replaced it with more neutral language.

Note that problematic words in the code have to remain in the docs until the code changes.

+55 -65

0 comment

10 changed files

pr created time in 11 days

create branch Buzzardo/spring-data-commons

branch : wording-fix

created branch time in 11 days

PR opened spring-projects/spring-data-commons

Wording changes

Removed the language of oppression and violence and replaced it with more neutral language.

Note that problematic words in the code have to remain in the docs until the code changes.

+12 -10

0 comment

2 changed files

pr created time in 11 days

create branch Buzzardo/spring-data-cassandra

branch : wording-fix

created branch time in 11 days

PR opened spring-projects/spring-data-cassandra

Wording changes

Removed the language of oppression and violence and replaced it with more neutral language.

Note that problematic words in the code have to remain in the docs until the code changes.

+8 -8

0 comment

2 changed files

pr created time in 11 days

push event Buzzardo/spring-boot

Stephane Nicoll

commit sha 5027a05b0e63ca77a572827e5239c29c3ca07f21

Upgrade to Johnzon Jsonb 1.2.4 Closes gh-21190

view details

Stephane Nicoll

commit sha 7820f0115ad5a9b8ff246b7cfc23f5dce530d4d1

Upgrade to Tomcat 9.0.34 Closes gh-21191

view details

Stephane Nicoll

commit sha e88ee06b5bee08c0718ae7161f1edbe96daff018

Upgrade to Groovy 2.5.11 Closes gh-21192

view details

Stephane Nicoll

commit sha 6182d83f8c84bee9adb25ac8a4e723fb6eabd071

Upgrade to Jetty 9.4.28.v20200408 Closes gh-21193

view details

Stephane Nicoll

commit sha e822c497efb86c7f21f88076ae125c0260dd47a4

Upgrade to Elasticsearch 6.8.8 Closes gh-21194

view details

Stephane Nicoll

commit sha 5668bf456a6be4873f3b917dcf6b83aeb9a410a3

Upgrade to Hibernate 5.4.14.Final Closes gh-21196

view details

Stephane Nicoll

commit sha 51cedc6225630a8e539eabce751be839684cc99e

Upgrade to Hibernate Validator 6.0.19.Final Closes gh-21197

view details

Stephane Nicoll

commit sha f45fd47a34e634392f9fa19f708484bfffd44c68

Upgrade to Infinispan 9.4.19.Final Closes gh-21198

view details

Stephane Nicoll

commit sha f621ac61fa9485ffd132811fa640998c542fe928

Upgrade to Kotlin 1.3.72 Closes gh-21199

view details

Stephane Nicoll

commit sha cacdfa443fcdc4dfde62eecdc50140a0a91a6b13

Upgrade to Liquibase 3.8.9 Closes gh-21200

view details

Stephane Nicoll

commit sha 65fc43865a9868cc8d1dfde768cbd8745f811c13

Upgrade to Neo4j Ogm 3.2.11 Closes gh-21201

view details

Stephane Nicoll

commit sha 423ec71d45cda3066495da3a3e54a90cfc7c8572

Upgrade to Postgresql 42.2.12 Closes gh-21202

view details

Stephane Nicoll

commit sha 4cc45f964c2d340b93ed230369f983d7d0257257

Upgrade to Spring Batch 4.2.2.RELEASE Closes gh-21203

view details

Stephane Nicoll

commit sha 47c26ef69dd0cfecf9f604397eef4a2277147e91

Upgrade to Spring Security 5.2.3.RELEASE Closes gh-21204

view details

Stephane Nicoll

commit sha 6ff7b812393eb204b3a387542cac5296fc9f514d

Upgrade to Spring Ws 3.0.9.RELEASE Closes gh-21205

view details

Stephane Nicoll

commit sha 71565a175e9b394e8178ebaad04ad5174b03ca9f

Merge branch '2.2.x'

view details

Brian Clozel

commit sha a63ab468a32b2f2d9d6350fd6abe5fe7daf91401

Upgrade to RSocket 1.0.0-RC7 This commit upgrades to RSocket 1.0.0-RC7. This new RC brings API changes we have to adapt to. As of this commit, we're introducing a new `RSocketServerCustomizer` which replaces the now deprecated `ServerRSocketFactoryProcessor`. Closes gh-21046

view details

Brian Clozel

commit sha dac62476a0465b8d4140b6f6cdedd015de9a49ca

Merge branch '2.2.x' Closes gh-21208

view details

Brian Clozel

commit sha 4c9c9ccd9184fe5e54dc8440b6e8a281308cc4d3

Upgrade to Spring Doc Resources 0.2.2.RELEASE Closes gh-21057

view details

dreis2211

commit sha 4b0a31acf89c88df2047556f68bb751bd64737c0

Delete Toml class See gh-21129

view details

push time in 11 days

create branch Buzzardo/spring-boot

branch : wording-changes

created branch time in 11 days

PR opened spring-projects/spring-boot

Wording changes

Changed the language of oppression and violence to more neutral language.

Note that the code still has some of those terms, so the docs cannot change those until they change in the code.


+144 -145

0 comment

11 changed files

pr created time in 11 days

pull request comment spring-projects/spring-framework

Wording changes

I forgot about the Javadoc. Thanks for the reminder, Sam. I'll add it to my todo list.

Buzzardo

comment created time in 12 days

PR opened spring-projects/spring-framework

Wording changes

Changed the language of oppression and violence to more neutral terms.

Note that there are lots of code items (method names, for example) that also need to change.

+211 -211

0 comment

14 changed files

pr created time in 12 days

create branch Buzzardo/spring-framework

branch : wording-changes

created branch time in 12 days

Pull request review comment spring-io/dataflow.spring.io

Link to reference for deployment properties

 The preceding example would be as follows in SCDF shell:
 stream deploy --name ticktock --properties "app.time.trigger.initial-delay=1,deployer.*.cpu=1,deployer.*.local.shutdown-timeout=60,deployer.*.memory=512,deployer.log.count=2,deployer.log.local.delete-files-on-exit=false,deployer.time.disk=512,spring.cloud.dataflow.skipper.platformName=local-debug"
 Deployment request has been sent for stream 'ticktock'
 ```
+
+## Platform Specific Deployer Properties
+
+### Cloud Foundry Deployer Properties
+
+The Cloud Foundry Deployer maps task and application properties to environment variable `SPRING_APPLICATION_JSON` by default. This is defined in the generated Cloud Foundry application manifest.
+The value is a JSON document and is a standard Spring Boot property source.
+You can optionally configure the deployer to create top level environment variables by setting `deployer.<app>.cloudfoundry.use-spring-application-json=false`.
+You may also add top-level environment variables explicitly using `deployer.<app>.cloudfoundry.env.<key>=<value>`. This is useful for adding [Java build pack configuration properties](https://github.com/cloudfoundry/java-buildpack) to the application manifest since the Java build pack applies its properties before the application starts and does not treat `SPRING_APPLICATION_JSON` as a special case.

Add "by" before "using".

dturanski

comment created time in 12 days

Pull request review comment spring-io/dataflow.spring.io

Link to reference for deployment properties

 description: 'Initiate a Batch deployment with deployment property overrides'

 # Deployment Properties

-When task definitions are launched to the target platforms `Local`, `CloudFoundry` and `Kubernetes`, you can provide the configuration properties that are applied to the task applications at launch time.
+When task definitions are launched to the target platforms `local`, `cloudFoundry` and `kubernetes`, you can provide the configuration properties that are applied to the task applications at launch time.
 For instance you can specify:

 - Deployer Properties - These properties customize how tasks are launched.
 - Application Properties - These are application specific properties.

+<!--TIP-->
+
+You can view the deployment properties for each of the platforms by selecting one of the following links: [local](https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#configuration-local-deployer), [cloudfoundry](https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#configuration-cloudfoundry-deployer) or, [kubernetes](https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#configuration-kubernetes-deployer).

Move the comma after "or" to before "or".

dturanski

comment created time in 12 days

Pull request review comment spring-io/dataflow.spring.io

Link to reference for deployment properties

 description: 'Initiate a Batch deployment with deployment property overrides'

 # Deployment Properties

-When task definitions are launched to the target platforms `Local`, `CloudFoundry` and `Kubernetes`, you can provide the configuration properties that are applied to the task applications at launch time.
+When task definitions are launched to the target platforms `local`, `cloudFoundry` and `kubernetes`, you can provide the configuration properties that are applied to the task applications at launch time.

Change this sentence to:

When task definitions are launched to the target platforms (local, cloudFoundry, and kubernetes), you can provide the configuration properties that are applied to the task applications at launch time.

(The difference is the parentheses around the list and the serial comma in the list.)
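
For context, launch-time properties follow the deployer/application formats quoted above; an illustrative launch (the task name and values are placeholders) might look like this:

```bash
task launch my-task --properties "deployer.my-task.memory=512,app.my-task.logging.level.root=DEBUG"
```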

dturanski

comment created time in 12 days

Pull request review comment spring-io/dataflow.spring.io

Link to reference for deployment properties

 The preceding example would be as follows in SCDF shell:
 stream deploy --name ticktock --properties "app.time.trigger.initial-delay=1,deployer.*.cpu=1,deployer.*.local.shutdown-timeout=60,deployer.*.memory=512,deployer.log.count=2,deployer.log.local.delete-files-on-exit=false,deployer.time.disk=512,spring.cloud.dataflow.skipper.platformName=local-debug"
 Deployment request has been sent for stream 'ticktock'
 ```
+
+## Platform Specific Deployer Properties
+
+### Cloud Foundry Deployer Properties
+
+The Cloud Foundry Deployer maps task and application properties to environment variable `SPRING_APPLICATION_JSON` by default. This is defined in the generated Cloud Foundry application manifest.
+The value is a JSON document and is a standard Spring Boot property source.
+You can optionally configure the deployer to create top level environment variables by setting `deployer.<app>.cloudfoundry.use-spring-application-json=false`.

Hyphenate "top level" (so "top-level").

dturanski

comment created time in 12 days

Pull request review comment spring-io/dataflow.spring.io

Link to reference for deployment properties

 When deploying a stream, properties fall into two groups:

 - Properties that control how the apps are deployed to the target platform and that use a `deployer` prefix are referred to as _deployer properties_.
 - Properties that control or override how the application behave and that are set during stream creation are referred to as _application properties_.

-You need to pick a defined platform configuration where each platform type (`local`, `cloudfoundry` or `kubernetes`) has a different set of possible deployment properties. Every platform has a set of generic properties for `memory`, `cpu`, and `disk` reservations and `count` to define how many instances should be created on that platform. The following image shows the Deploy Stream Definition view, where you can set these properties:
+You need to pick a defined platform configuration where each platform type (`local`, `cloudfoundry` or `kubernetes`) has a different set of possible deployment properties. Every platform has a set of generic properties for `memory`, `cpu`, and `disk` reservations and `count` to define how many instances should be created on that platform.
+
+<!--TIP-->
+
+You can view the deployment properties for each of the platforms by selecting one of the following links: [local](https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#configuration-local-deployer), [cloudfoundry](https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#configuration-cloudfoundry-deployer) or, [kubernetes](https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#configuration-kubernetes-deployer).
+
+<!--END_TIP-->
+
+The following image shows the Deploy Stream Definition view, where you can set these properties:

The next four lines should be:

You can set the following properties:

  • Deployer Properties: These properties customize how tasks are launched.
  • Application Properties: These are application specific properties.

The following image shows the Deploy Stream Definition view:
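
As a concrete illustration (not part of the suggested text), the two kinds of properties can be combined in one deployment, reusing keys that already appear on this page:

```bash
stream deploy --name ticktock --properties "deployer.*.memory=512,app.time.trigger.initial-delay=1"
```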

dturanski

comment created time in 12 days

Pull request review comment spring-io/dataflow.spring.io

Link to reference for deployment properties

 The preceding example would be as follows in SCDF shell:
 stream deploy --name ticktock --properties "app.time.trigger.initial-delay=1,deployer.*.cpu=1,deployer.*.local.shutdown-timeout=60,deployer.*.memory=512,deployer.log.count=2,deployer.log.local.delete-files-on-exit=false,deployer.time.disk=512,spring.cloud.dataflow.skipper.platformName=local-debug"
 Deployment request has been sent for stream 'ticktock'
 ```
+
+## Platform Specific Deployer Properties
+
+### Cloud Foundry Deployer Properties
+
+The Cloud Foundry Deployer maps task and application properties to environment variable `SPRING_APPLICATION_JSON` by default. This is defined in the generated Cloud Foundry application manifest.

Change the first sentence to: The Cloud Foundry Deployer maps task and application properties to an environment variable (by default, SPRING_APPLICATION_JSON).
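
For context, that variable holds a JSON document of Spring Boot properties. A minimal sketch of its contents (the property shown is only a hypothetical example) is:

```json
{"logging.level.root": "DEBUG"}
```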

dturanski

comment created time in 12 days

Pull request review comment spring-io/dataflow.spring.io

Link to reference for deployment properties

 When deploying a stream, properties fall into two groups:

 - Properties that control how the apps are deployed to the target platform and that use a `deployer` prefix are referred to as _deployer properties_.
 - Properties that control or override how the application behave and that are set during stream creation are referred to as _application properties_.

-You need to pick a defined platform configuration where each platform type (`local`, `cloudfoundry` or `kubernetes`) has a different set of possible deployment properties. Every platform has a set of generic properties for `memory`, `cpu`, and `disk` reservations and `count` to define how many instances should be created on that platform. The following image shows the Deploy Stream Definition view, where you can set these properties:
+You need to pick a defined platform configuration where each platform type (`local`, `cloudfoundry` or `kubernetes`) has a different set of possible deployment properties. Every platform has a set of generic properties for `memory`, `cpu`, and `disk` reservations and `count` to define how many instances should be created on that platform.

Add a comma after cloudfoundry.

dturanski

comment created time in 12 days

Pull request review comment spring-io/dataflow.spring.io

Link to reference for deployment properties

 When deploying a stream, properties fall into two groups:

 - Properties that control how the apps are deployed to the target platform and that use a `deployer` prefix are referred to as _deployer properties_.
 - Properties that control or override how the application behave and that are set during stream creation are referred to as _application properties_.

-You need to pick a defined platform configuration where each platform type (`local`, `cloudfoundry` or `kubernetes`) has a different set of possible deployment properties. Every platform has a set of generic properties for `memory`, `cpu`, and `disk` reservations and `count` to define how many instances should be created on that platform. The following image shows the Deploy Stream Definition view, where you can set these properties:
+You need to pick a defined platform configuration where each platform type (`local`, `cloudfoundry` or `kubernetes`) has a different set of possible deployment properties. Every platform has a set of generic properties for `memory`, `cpu`, and `disk` reservations and `count` to define how many instances should be created on that platform.
+
+<!--TIP-->
+
+You can view the deployment properties for each of the platforms by selecting one of the following links: [local](https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#configuration-local-deployer), [cloudfoundry](https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#configuration-cloudfoundry-deployer) or, [kubernetes](https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#configuration-kubernetes-deployer).

Move the comma after "or" to before "or".

dturanski

comment created time in 12 days

issue opened spring-projects/spring-data-jdbc-ext

Can't build

The project won't build. I think the version of Gradle it uses is so old that the distribution can no longer be downloaded. The project is on version 1.10, while 6.3 is the current version.

Here's the error (minus the stack trace):

Downloading http://services.gradle.org/distributions/gradle-1.10-bin.zip

Exception in thread "main" java.lang.RuntimeException: java.io.IOException: Server returned HTTP response code: 403 for URL: http://services.gradle.org/distributions/gradle-1.10-bin.zip
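
A likely fix, assuming the project uses the standard Gradle wrapper layout, is to point `gradle/wrapper/gradle-wrapper.properties` at a current distribution over HTTPS, for example:

```
distributionUrl=https\://services.gradle.org/distributions/gradle-6.3-bin.zip
```

With that change, `./gradlew build` should download a distribution that still exists on services.gradle.org.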

created time in 19 days

PR opened SteeltoeOSS/Documentation

Editing pass

Edited for spelling, punctuation, grammar, and voice.

+1329 -1249

0 comment

58 changed files

pr created time in 21 days

create barnchBuzzardo/Documentation

branch : editing

created branch time in 21 days

pull request comment spring-guides/gs-crud-with-vaadin

Upgrade to Vaadin 14.2.1

Thanks for keeping us current. :)

manolo

comment created time in a month

push event spring-guides/gs-crud-with-vaadin

Manolo Carrasco

commit sha 9fa7bf5c073deea72e744decb0b2562f422758eb

Upgrade to Vaadin 14.2.1

view details

push time in a month

pull request comment spring-guides/tut-spring-boot-oauth2

Fix the typo

Thanks.

hannibal1296

comment created time in a month

push event spring-guides/tut-spring-boot-oauth2

hannibal1296

commit sha 5cc3b36f0bf8d2a9093a1378bfcc87c22901ed01

Fix the typo

view details

push time in a month

Pull request review comment pivotal-cf/docs-spring-cloud-dataflow-k8s

Pipeline

+---+title: Creating a Data Pipeline with Spring Cloud Data Flow for Kubernetes+owner: Spring Cloud Data Flow Release Engineering+---++This topic describes how to get started using Spring Cloud Data Flow for Kubernetes.+The examples below show how to quickly create a data pipeline.++## <a id='start-shell'></a> Start the Spring Cloud Data Flow Shell++Before continuing with this topic, you need to download and connect the Spring Cloud Data Flow Shell as described in the [Connecting to SCDF for Kubernetes](connecting-scdf-for-kubernetes.html) topic.++```bash+$ java -jar spring-cloud-dataflow-shell-2.5.1.RELEASE.jar --dataflow.uri=http://data-flow.example.com+  ____                              ____ _                __+ / ___| _ __  _ __(_)_ __   __ _   / ___| | ___  _   _  __| |+ \___ \| '_ \| '__| | '_ \ / _` | | |   | |/ _ \| | | |/ _` |+  ___) | |_) | |  | | | | | (_| | | |___| | (_) | |_| | (_| |+ |____/| .__/|_|  |_|_| |_|\__, |  \____|_|\___/ \__,_|\__,_|+  ____ |_|    _          __|___/                 __________+ |  _ \  __ _| |_ __ _  |  ___| | _____      __  \ \ \ \ \ \+ | | | |/ _` | __/ _` | | |_  | |/ _ \ \ /\ / /   \ \ \ \ \ \+ | |_| | (_| | || (_| | |  _| | | (_) \ V  V /    / / / / / /+ |____/ \__,_|\__\__,_| |_|   |_|\___/ \_/\_/    /_/_/_/_/_/++2.5.1.RELEASE++Welcome to the Spring Cloud Data Flow shell. For assistance hit TAB or type "help".+Successfully targeted http://data-flow.example.com+dataflow:>+```++## <a id='import-stream-apps'></a> Import the Spring Cloud Stream Applications++Import the stream app starters using the Data Flow shell’s `app import` command:++Depending on the message broker that you are using, there are two versions of these apps that you can register.+One version is for "RabbitMQ + Docker" and the other version is for "Apache Kafka + Docker".+See the [Register Supported Applications and Tasks](https://docs.spring.io/spring-cloud-dataflow/docs/2.5.1.RELEASE/reference/htmlsingle/#supported-apps-and-tasks) page for more detail.++For RabbitMQ use:++```bash+dataflow:>app import https://dataflow.spring.io/rabbitmq-docker-latest+Successfully registered 66 applications from [source.sftp.metadata, sink.throughput.metadata, processor.object-detection.metadata, sink.cassandra.metadata, source.loggregator.metadata, source.s3, processor.aggregator.metadata, sink.hdfs, sink.rabbit, sink.ftp.metadata, processor.tasklaunchrequest-transform.metadata, sink.pgcopy, processor.httpclient, sink.jdbc, source.tcp, source.s3.metadata, sink.jdbc.metadata, sink.mongodb.metadata, sink.tcp.metadata, source.mqtt, source.gemfire.metadata, sink.gemfire.metadata, source.load-generator.metadata, sink.log, sink.redis-pubsub, sink.task-launcher-dataflow, sink.pgcopy.metadata, processor.python-http.metadata, sink.counter.metadata, processor.grpc, processor.twitter-sentiment, sink.file.metadata, sink.s3.metadata, processor.python-http, processor.tcp-client, sink.hdfs.metadata, source.cdc-debezium.metadata, sink.sftp.metadata, sink.tcp, source.sftp, source.cdc-debezium, source.http, processor.groovy-filter.metadata, processor.splitter.metadata, source.syslog.metadata, processor.image-recognition, source.file, processor.bridge, processor.tensorflow, processor.tensorflow.metadata, sink.cassandra, processor.twitter-sentiment.metadata, processor.python-jython.metadata, source.time.metadata, source.tcp.metadata, source.sftp-dataflow.metadata, processor.transform.metadata, source.ftp.metadata, processor.scriptable-transform, source.triggertask.metadata, source.mqtt.metadata, 
processor.grpc.metadata, source.jms.metadata, source.syslog, source.file.metadata, processor.transform, source.time, processor.bridge.metadata, sink.s3, source.triggertask, source.gemfire-cq.metadata, source.trigger.metadata, source.jms, source.sftp-dataflow, source.mail, sink.mqtt.metadata, source.mongodb, source.rabbit, sink.router, source.ftp, sink.file, processor.groovy-transform.metadata, processor.counter.metadata, source.tcp-client, processor.scriptable-transform.metadata, processor.pose-estimation, processor.splitter, source.gemfire, sink.redis-pubsub.metadata, source.load-generator, source.loggregator, processor.aggregator, processor.groovy-transform, processor.object-detection, processor.python-jython, sink.throughput, processor.pose-estimation.metadata, sink.ftp, processor.filter.metadata, sink.mqtt, source.trigger, sink.gemfire, processor.header-enricher.metadata, sink.sftp, processor.filter, source.jdbc, source.gemfire-cq, source.twitterstream, sink.rabbit.metadata, sink.websocket.metadata, processor.httpclient.metadata, sink.log.metadata, processor.tasklaunchrequest-transform, processor.tcp-client.metadata, sink.websocket, processor.image-recognition.metadata, source.jdbc.metadata, source.mail.metadata, source.rabbit.metadata, source.tcp-client.metadata, processor.counter, processor.pmml, source.http.metadata, processor.groovy-filter, sink.counter, source.twitterstream.metadata, processor.header-enricher, sink.task-launcher-dataflow.metadata, source.mongodb.metadata, processor.pmml.metadata, sink.router.metadata, sink.mongodb]+```++For Apache Kafka use:++```bash+dataflow:>app import https://dataflow.spring.io/kafka-docker-latest+Successfully registered 66 applications from [source.sftp.metadata, sink.throughput.metadata, processor.object-detection.metadata, sink.cassandra.metadata, source.loggregator.metadata, source.s3, processor.aggregator.metadata, sink.hdfs, sink.rabbit, sink.ftp.metadata, processor.tasklaunchrequest-transform.metadata, sink.pgcopy, processor.httpclient, sink.jdbc, source.tcp, source.s3.metadata, sink.jdbc.metadata, sink.mongodb.metadata, sink.tcp.metadata, source.mqtt, source.gemfire.metadata, sink.gemfire.metadata, source.load-generator.metadata, sink.log, sink.redis-pubsub, sink.task-launcher-dataflow, sink.pgcopy.metadata, processor.python-http.metadata, sink.counter.metadata, processor.grpc, processor.twitter-sentiment, sink.file.metadata, sink.s3.metadata, processor.python-http, processor.tcp-client, sink.hdfs.metadata, source.cdc-debezium.metadata, sink.sftp.metadata, sink.tcp, source.sftp, source.cdc-debezium, source.http, processor.groovy-filter.metadata, processor.splitter.metadata, source.syslog.metadata, processor.image-recognition, source.file, processor.bridge, processor.tensorflow, processor.tensorflow.metadata, sink.cassandra, processor.twitter-sentiment.metadata, processor.python-jython.metadata, source.time.metadata, source.tcp.metadata, source.sftp-dataflow.metadata, processor.transform.metadata, source.ftp.metadata, processor.scriptable-transform, source.triggertask.metadata, source.mqtt.metadata, processor.grpc.metadata, source.jms.metadata, source.syslog, source.file.metadata, processor.transform, source.time, processor.bridge.metadata, sink.s3, source.triggertask, source.gemfire-cq.metadata, source.trigger.metadata, source.jms, source.sftp-dataflow, source.mail, sink.mqtt.metadata, source.mongodb, source.rabbit, sink.router, source.ftp, sink.file, processor.groovy-transform.metadata, processor.counter.metadata, source.tcp-client, 
processor.scriptable-transform.metadata, processor.pose-estimation, processor.splitter, source.gemfire, sink.redis-pubsub.metadata, source.load-generator, source.loggregator, processor.aggregator, processor.groovy-transform, processor.object-detection, processor.python-jython, sink.throughput, processor.pose-estimation.metadata, sink.ftp, processor.filter.metadata, sink.mqtt, source.trigger, sink.gemfire, processor.header-enricher.metadata, sink.sftp, processor.filter, source.jdbc, source.gemfire-cq, source.twitterstream, sink.rabbit.metadata, sink.websocket.metadata, processor.httpclient.metadata, sink.log.metadata, processor.tasklaunchrequest-transform, processor.tcp-client.metadata, sink.websocket, processor.image-recognition.metadata, source.jdbc.metadata, source.mail.metadata, source.rabbit.metadata, source.tcp-client.metadata, processor.counter, processor.pmml, source.http.metadata, processor.groovy-filter, sink.counter, source.twitterstream.metadata, processor.header-enricher, sink.task-launcher-dataflow.metadata, source.mongodb.metadata, processor.pmml.metadata, sink.router.metadata, sink.mongodb]+```++## <a id='create-stream'></a> Create the Stream Definition++With the app starters imported, you can use three apps (the `http` source, the `split` processor, and the `log` sink) to create a stream that consumes data via an HTTP POST request, processes it by splitting it into words, and outputs the results in logs.++Create the stream using the Data Flow shell’s `stream create` command:

Add "by" before "using".

trisberg

comment created time in a month

Pull request review comment pivotal-cf/docs-spring-cloud-dataflow-k8s

Pipeline

+---+title: Creating a Data Pipeline with Spring Cloud Data Flow for Kubernetes+owner: Spring Cloud Data Flow Release Engineering+---++This topic describes how to get started using Spring Cloud Data Flow for Kubernetes.+The examples below show how to quickly create a data pipeline.++## <a id='start-shell'></a> Start the Spring Cloud Data Flow Shell++Before continuing with this topic, you need to download and connect the Spring Cloud Data Flow Shell as described in the [Connecting to SCDF for Kubernetes](connecting-scdf-for-kubernetes.html) topic.++```bash+$ java -jar spring-cloud-dataflow-shell-2.5.1.RELEASE.jar --dataflow.uri=http://data-flow.example.com+  ____                              ____ _                __+ / ___| _ __  _ __(_)_ __   __ _   / ___| | ___  _   _  __| |+ \___ \| '_ \| '__| | '_ \ / _` | | |   | |/ _ \| | | |/ _` |+  ___) | |_) | |  | | | | | (_| | | |___| | (_) | |_| | (_| |+ |____/| .__/|_|  |_|_| |_|\__, |  \____|_|\___/ \__,_|\__,_|+  ____ |_|    _          __|___/                 __________+ |  _ \  __ _| |_ __ _  |  ___| | _____      __  \ \ \ \ \ \+ | | | |/ _` | __/ _` | | |_  | |/ _ \ \ /\ / /   \ \ \ \ \ \+ | |_| | (_| | || (_| | |  _| | | (_) \ V  V /    / / / / / /+ |____/ \__,_|\__\__,_| |_|   |_|\___/ \_/\_/    /_/_/_/_/_/++2.5.1.RELEASE++Welcome to the Spring Cloud Data Flow shell. For assistance hit TAB or type "help".+Successfully targeted http://data-flow.example.com+dataflow:>+```++## <a id='import-stream-apps'></a> Import the Spring Cloud Stream Applications++Import the stream app starters using the Data Flow shell’s `app import` command:++Depending on the message broker that you are using, there are two versions of these apps that you can register.+One version is for "RabbitMQ + Docker" and the other version is for "Apache Kafka + Docker".+See the [Register Supported Applications and Tasks](https://docs.spring.io/spring-cloud-dataflow/docs/2.5.1.RELEASE/reference/htmlsingle/#supported-apps-and-tasks) page for more detail.++For RabbitMQ use:++```bash+dataflow:>app import https://dataflow.spring.io/rabbitmq-docker-latest+Successfully registered 66 applications from [source.sftp.metadata, sink.throughput.metadata, processor.object-detection.metadata, sink.cassandra.metadata, source.loggregator.metadata, source.s3, processor.aggregator.metadata, sink.hdfs, sink.rabbit, sink.ftp.metadata, processor.tasklaunchrequest-transform.metadata, sink.pgcopy, processor.httpclient, sink.jdbc, source.tcp, source.s3.metadata, sink.jdbc.metadata, sink.mongodb.metadata, sink.tcp.metadata, source.mqtt, source.gemfire.metadata, sink.gemfire.metadata, source.load-generator.metadata, sink.log, sink.redis-pubsub, sink.task-launcher-dataflow, sink.pgcopy.metadata, processor.python-http.metadata, sink.counter.metadata, processor.grpc, processor.twitter-sentiment, sink.file.metadata, sink.s3.metadata, processor.python-http, processor.tcp-client, sink.hdfs.metadata, source.cdc-debezium.metadata, sink.sftp.metadata, sink.tcp, source.sftp, source.cdc-debezium, source.http, processor.groovy-filter.metadata, processor.splitter.metadata, source.syslog.metadata, processor.image-recognition, source.file, processor.bridge, processor.tensorflow, processor.tensorflow.metadata, sink.cassandra, processor.twitter-sentiment.metadata, processor.python-jython.metadata, source.time.metadata, source.tcp.metadata, source.sftp-dataflow.metadata, processor.transform.metadata, source.ftp.metadata, processor.scriptable-transform, source.triggertask.metadata, source.mqtt.metadata, 
processor.grpc.metadata, source.jms.metadata, source.syslog, source.file.metadata, processor.transform, source.time, processor.bridge.metadata, sink.s3, source.triggertask, source.gemfire-cq.metadata, source.trigger.metadata, source.jms, source.sftp-dataflow, source.mail, sink.mqtt.metadata, source.mongodb, source.rabbit, sink.router, source.ftp, sink.file, processor.groovy-transform.metadata, processor.counter.metadata, source.tcp-client, processor.scriptable-transform.metadata, processor.pose-estimation, processor.splitter, source.gemfire, sink.redis-pubsub.metadata, source.load-generator, source.loggregator, processor.aggregator, processor.groovy-transform, processor.object-detection, processor.python-jython, sink.throughput, processor.pose-estimation.metadata, sink.ftp, processor.filter.metadata, sink.mqtt, source.trigger, sink.gemfire, processor.header-enricher.metadata, sink.sftp, processor.filter, source.jdbc, source.gemfire-cq, source.twitterstream, sink.rabbit.metadata, sink.websocket.metadata, processor.httpclient.metadata, sink.log.metadata, processor.tasklaunchrequest-transform, processor.tcp-client.metadata, sink.websocket, processor.image-recognition.metadata, source.jdbc.metadata, source.mail.metadata, source.rabbit.metadata, source.tcp-client.metadata, processor.counter, processor.pmml, source.http.metadata, processor.groovy-filter, sink.counter, source.twitterstream.metadata, processor.header-enricher, sink.task-launcher-dataflow.metadata, source.mongodb.metadata, processor.pmml.metadata, sink.router.metadata, sink.mongodb]+```++For Apache Kafka use:++```bash+dataflow:>app import https://dataflow.spring.io/kafka-docker-latest+Successfully registered 66 applications from [source.sftp.metadata, sink.throughput.metadata, processor.object-detection.metadata, sink.cassandra.metadata, source.loggregator.metadata, source.s3, processor.aggregator.metadata, sink.hdfs, sink.rabbit, sink.ftp.metadata, processor.tasklaunchrequest-transform.metadata, sink.pgcopy, processor.httpclient, sink.jdbc, source.tcp, source.s3.metadata, sink.jdbc.metadata, sink.mongodb.metadata, sink.tcp.metadata, source.mqtt, source.gemfire.metadata, sink.gemfire.metadata, source.load-generator.metadata, sink.log, sink.redis-pubsub, sink.task-launcher-dataflow, sink.pgcopy.metadata, processor.python-http.metadata, sink.counter.metadata, processor.grpc, processor.twitter-sentiment, sink.file.metadata, sink.s3.metadata, processor.python-http, processor.tcp-client, sink.hdfs.metadata, source.cdc-debezium.metadata, sink.sftp.metadata, sink.tcp, source.sftp, source.cdc-debezium, source.http, processor.groovy-filter.metadata, processor.splitter.metadata, source.syslog.metadata, processor.image-recognition, source.file, processor.bridge, processor.tensorflow, processor.tensorflow.metadata, sink.cassandra, processor.twitter-sentiment.metadata, processor.python-jython.metadata, source.time.metadata, source.tcp.metadata, source.sftp-dataflow.metadata, processor.transform.metadata, source.ftp.metadata, processor.scriptable-transform, source.triggertask.metadata, source.mqtt.metadata, processor.grpc.metadata, source.jms.metadata, source.syslog, source.file.metadata, processor.transform, source.time, processor.bridge.metadata, sink.s3, source.triggertask, source.gemfire-cq.metadata, source.trigger.metadata, source.jms, source.sftp-dataflow, source.mail, sink.mqtt.metadata, source.mongodb, source.rabbit, sink.router, source.ftp, sink.file, processor.groovy-transform.metadata, processor.counter.metadata, source.tcp-client, 
processor.scriptable-transform.metadata, processor.pose-estimation, processor.splitter, source.gemfire, sink.redis-pubsub.metadata, source.load-generator, source.loggregator, processor.aggregator, processor.groovy-transform, processor.object-detection, processor.python-jython, sink.throughput, processor.pose-estimation.metadata, sink.ftp, processor.filter.metadata, sink.mqtt, source.trigger, sink.gemfire, processor.header-enricher.metadata, sink.sftp, processor.filter, source.jdbc, source.gemfire-cq, source.twitterstream, sink.rabbit.metadata, sink.websocket.metadata, processor.httpclient.metadata, sink.log.metadata, processor.tasklaunchrequest-transform, processor.tcp-client.metadata, sink.websocket, processor.image-recognition.metadata, source.jdbc.metadata, source.mail.metadata, source.rabbit.metadata, source.tcp-client.metadata, processor.counter, processor.pmml, source.http.metadata, processor.groovy-filter, sink.counter, source.twitterstream.metadata, processor.header-enricher, sink.task-launcher-dataflow.metadata, source.mongodb.metadata, processor.pmml.metadata, sink.router.metadata, sink.mongodb]+```++## <a id='create-stream'></a> Create the Stream Definition++With the app starters imported, you can use three apps (the `http` source, the `split` processor, and the `log` sink) to create a stream that consumes data via an HTTP POST request, processes it by splitting it into words, and outputs the results in logs.++Create the stream using the Data Flow shell’s `stream create` command:++```bash+dataflow:>stream create --name words --definition "http | splitter --expression=payload.split(' ') | log"+Created new stream 'words'+```++## <a id='deploy-stream'></a> Deploy the Stream++Next, deploy the stream, using the `stream deploy` command:++```bash+dataflow:>stream deploy words --properties deployer.http.kubernetes.createLoadBalancer=true+Deployment request has been sent for stream 'words'+```++In order to be able to post HTTP requests to the `http` app, we need to specify a deployment property requesting a `LoadBalancer` for the apps service.++You can run the `kubectl get pods` command from a different terminal window to see the application pods deployed as part of the stream:++```bash+$ kubectl get pods -l role=spring-app+NAME                                 READY   STATUS    RESTARTS   AGE+words-http-v1-7c977c4965-5q27m       1/1     Running   0          5m8s+words-log-v1-855f6ddd69-slmnn        1/1     Running   0          5m8s+words-splitter-v1-5466fdf6d4-lrz2c   1/1     Running   0          5m8s+```++You can also check the stream status from the shell using the `stream list` command:

Add "by" before "using".

trisberg

comment created time in a month

Pull request review comment pivotal-cf/docs-spring-cloud-dataflow-k8s

Pipeline

+Create the stream using the Data Flow shell’s `stream create` command:
+
+```bash
+dataflow:>stream create --name words --definition "http | splitter --expression=payload.split(' ') | log"
+Created new stream 'words'
+```
+
+## <a id='deploy-stream'></a> Deploy the Stream
+
+Next, deploy the stream, using the `stream deploy` command:

Add "by" before "using".

trisberg

comment created time in a month

Pull request review commentpivotal-cf/docs-spring-cloud-dataflow-k8s

Pipeline

+For Apache Kafka use:

Add a comma after "Kafka".

trisberg

comment created time in a month

Pull request review commentpivotal-cf/docs-spring-cloud-dataflow-k8s

Pipeline

+## <a id='create-stream'></a> Create the Stream Definition
+
+With the app starters imported, you can use three apps (the `http` source, the `split` processor, and the `log` sink) to create a stream that consumes data via an HTTP POST request, processes it by splitting it into words, and outputs the results in logs.

Replace "via" with "through" (part of the general rule to avoid Latin).

trisberg

comment created time in a month

Pull request review commentpivotal-cf/docs-spring-cloud-dataflow-k8s

Pipeline

+For RabbitMQ use:

Add a comma after "RabbitMQ".

trisberg

comment created time in a month

Pull request review commentpivotal-cf/docs-spring-cloud-dataflow-k8s

Pipeline

+Depending on the message broker that you are using, there are two versions of these apps that you can register.
+One version is for "RabbitMQ + Docker" and the other version is for "Apache Kafka + Docker".

Add a comma after 'Docker"' and before 'and'.

trisberg

comment created time in a month

Pull request review commentpivotal-cf/docs-spring-cloud-dataflow-k8s

Pipeline

+Before continuing with this topic, you need to download and connect the Spring Cloud Data Flow Shell as described in the [Connecting to SCDF for Kubernetes](connecting-scdf-for-kubernetes.html) topic.

Add a comma between "Shell" and "as".

trisberg

comment created time in a month

Pull request review commentpivotal-cf/docs-spring-cloud-dataflow-k8s

Pipeline

+## <a id='import-stream-apps'></a> Import the Spring Cloud Stream Applications
+
+Import the stream app starters using the Data Flow shell’s `app import` command:

Change the colon to a period (because the example isn't the next line).

trisberg

comment created time in a month

Pull request review commentpivotal-cf/docs-spring-cloud-dataflow-k8s

Pipeline

+This topic describes how to get started using Spring Cloud Data Flow for Kubernetes.
+The examples below show how to quickly create a data pipeline.

Replace "below" with "in this section".

trisberg

comment created time in a month

Pull request review commentpivotal-cf/docs-spring-cloud-dataflow-k8s

Editing pass

 This topic contains the release notes for Spring Cloud Data Flow for Kubernetes.

 ## <a id="0-1-0"></a>v0.1.0

-**Release Date: MMM DD, 2020**
+**Release Date: JUN 03, 2020**

OK. I'll change that. I was staying with the defined format.

Buzzardo

comment created time in a month

Pull request review commentpivotal-cf/docs-spring-cloud-dataflow-k8s

Editing pass

 This topic describes how to install Spring Cloud Data Flow for Kubernetes.

 Before proceeding, review the [Configuring Installation Values](configuring-installation-values.html) topic to ensure that you have configured all of the required or recommended installation resources.

-## <a id='install-scdf-create-imagpe-pull-secret'></a> Creating image pull secret for Spring Cloud Data Flow for Kubernetes
+## <a id='install-scdf-create-imagpe-pull-secret'></a> Creating an Image Pull Secret for Spring Cloud Data Flow for Kubernetes

-Before installing Spring Cloud Data Flow for Kubernetes you must create a Kubernetes Secret that allows the Spring Cloud Data Flow for Kubernetes service accounts to pull images from the Tanzu Network Registry or from the registry where you optionally have relocated the images.
-In the namespace where you intend to install, create a Secret as shown below.
+Before installing Spring Cloud Data Flow for Kubernetes, you must create a Kubernetes secret that lets the Spring Cloud Data Flow for Kubernetes service accounts pull images from the Tanzu Network Registry or from the registry where you optionally have relocated the images.

No, it shouldn't. The standard is to capitalize product names (and company names and so on) but not the parts and pieces of a program or framework.

Buzzardo

comment created time in a month

Pull request review commentpivotal-cf/docs-spring-cloud-dataflow-k8s

Editing pass

 Data Flow Options:
   --help                                            This message.
 ```

-We can see from the above that there are a number of authentication settings you can use based on how the secutity is configured for the server.
+You can use a number of authentication settings, based on how the security is configured for the server.

-For the examples tha folows we assume that the server is not secured and all we need to connect is the URI for the server.
+For the examples that follow, we assume that the server is not secured and all we need to connect is the URI for the server.

-You can access to the Spring Cloud Data Flow server using the DNS name that is configured for the Ingress resource.
+You can access the Spring Cloud Data Flow server by using the DNS name that is configured for the Ingress resource.

-To determine the DNS name you can run the following `kubectl` command and use the value shown for `HOSTS`.
+To determine the DNS name, you can run the following `kubectl` command and use the value shown for `HOSTS`:

 ```bash
 $ kubectl get ingress scdf-ingress
 NAME           HOSTS                            ADDRESS          PORTS   AGE
 scdf-ingress   data-flow.35.232.203.79.xip.io   35.225.206.207   80      23h
 ```

-To open the dashboard for the host shown above use the following URL in your browser: http://data-flow.35.232.203.79.xip.io/dashboard
-To connect the shell to the DNS name listed above use the following command.
+To open the dashboard for the host shown above, use the following URL in your browser: `http://data-flow.35.232.203.79.xip.io/dashboard`
+To connect the shell to the DNS name listed above, use the following command:

The link doesn't go to an existing site, right? The reader has to build something and then link to it, if I understand correctly. If that's right, these should be treated as code rather than as links.

Buzzardo

comment created time in a month

Pull request review commentpivotal-cf/docs-spring-cloud-dataflow-k8s

Editing pass

 Forwarding from [::1]:8080 -> 80
 Handling connection for 8080
 ```

-You can then connect to the dashboard using the URL: http://localhost:8080/dashboard
+You can then connect to the dashboard by at `http://localhost:8080/dashboard`

Should be "at". I'll fix it when you finish your review. Thanks.

Buzzardo

comment created time in a month

PR opened pivotal-cf/docs-spring-cloud-dataflow-k8s

Editing pass

Edited for spelling, grammar, usage, and corporate voice.

+159 -164

0 comment

7 changed files

pr created time in a month

create branchBuzzardo/docs-spring-cloud-dataflow-k8s

branch : editing

created branch time in a month

Pull request review commentpivotal-cf/docs-spring-cloud-dataflow-k8s

Update directory and files explanations

 The apps directory contains the following folder structure based on Kustomize na
 │       └── schemas
 </pre>

-There are two directories for each application that is part of Spring Cloud Data Flow for Kubernetes.
-The first directory, data-flow, contains the Data Flow Server and the second directory contains the Skipper server.
+There are two directories, one for each application that is part of Spring Cloud Data Flow for Kubernetes.
+The first directory, `data-flow`, contains the Data Flow Server and the second directory, `skipper` contains the Skipper server.
+There is also an `ingress` directory that contains configuration files for an Ingress resource.

 Within each application directory there is the following directory structure

-* `images` - location for container images that can be downloaded separately.
+* `images` - location for container images (only present if "SCDF for Kubernetes installation images" archive was downloaded and extracted)

Change "* images - location " to "* images: The location "

trisberg

comment created time in a month

Pull request review commentpivotal-cf/docs-spring-cloud-dataflow-k8s

Update directory and files explanations

+There are two directories, one for each application that is part of Spring Cloud Data Flow for Kubernetes.
+The first directory, `data-flow`, contains the Data Flow Server and the second directory, `skipper` contains the Skipper server.

Add a comma after "Data Flow Server" and another after skipper.

trisberg

comment created time in a month

Pull request review commentpivotal-cf/docs-spring-cloud-dataflow-k8s

Update directory and files explanations

 * `kustomize` - the directory containing Kubernetes and application configuration files for use with Kustomize.
-* `schemas` - Database schemas that you can install manually if you do not want each server to install them upon startup.
+* `schemas` - database schemas that you can install manually if you do not want each server to install them upon startup.

Change "* schemas - database" to "* schemas: Database".

trisberg

comment created time in a month

pull request commentspring-guides/tut-spring-boot-kotlin

Update Spring Initializr image and command line support link

Thank you for catching that link. Updating the image is nice, too. :)

rylim

comment created time in a month

push eventspring-guides/tut-spring-boot-kotlin

rylim

commit sha 1704889d71be2717f704d2db5bdc59bc1a43f143

Update image

view details

rylim

commit sha 9a4d7411c9b0cb3b4885cb862bcec1ba5249069b

Fix broken Spring Initializr command line support link

view details

push time in a month

PR merged spring-guides/tut-spring-boot-kotlin

Update Spring Initializr image and command line support link
  • Update screenshot for Spring Initializr project generation
  • Fix broken link for Spring Initializr command line support
+1 -1

0 comment

2 changed files

rylim

pr closed time in a month

PR opened SteeltoeOSS/Documentation

Various changes based on feedback

The Steeltoe team found some formatting troubles. They also didn't need some content (testing.md) and wanted to use the content of messaging.md and quick-tour.md as the overview.

+434 -1131

0 comment

6 changed files

pr created time in a month

create branchBuzzardo/Documentation

branch : messaging-docs-a

created branch time in a month

pull request commentspring-guides/gs-caching

Update instructions to add model creation header

Thanks for catching that. It's simple enough that I merged it without the CLA being signed.

timani

comment created time in a month

PR merged spring-guides/gs-caching

Update instructions to add model creation header

The first step is to create a model, but the first header is "Create a Book Repository", which is the second step.

+3 -1

2 comments

1 changed file

timani

pr closed time in a month

push eventspring-guides/gs-caching

Timani Tunduwani

commit sha 227255a9934f14f76a43e8a43e8866563cfb46d9

Update instructions to add model creation header The first step is to create a model but the first header is "Create a Book Repository" which is the second step

view details

push time in a month

pull request commentspring-projects/spring-batch

Made the "Both" option make sense

@benas I have rebased and cleared up all the conflicts.

Buzzardo

comment created time in 2 months

push eventBuzzardo/spring-batch

Mahmoud Ben Hassine

commit sha a5748136d23df4976848ff51c41bcf68a6300488

Fix typos

view details

Mahmoud Ben Hassine

commit sha 2a578ae6639bd3b2087549aef14194fc1e1a6ec3

Remove unused imports

view details

Mahmoud Ben Hassine

commit sha 7ca11f2f998e5e46b35ec7a56fcd0e62b572b667

Add assertion that a serializer was set in the JdbcExecutionContextDao Resolves BATCH-2779

view details

Mahmoud Ben Hassine

commit sha edfbdbb82a61294c2ea7eaabeb1add29ca6cb068

Tweak XmlInputFactory settings

view details

Mahmoud Ben Hassine

commit sha 5f0f7c494bbfab65600110b67a7f805884697996

Fix code example in docs `StepExecution#setTerminateOnly` does not take a boolean parameter.

view details

Mahmoud Ben Hassine

commit sha 90a89c1b02edecb596005efb8ad23e144af490bb

Fix JobOperatorFunctionalTests#testMultipleSimultaneousInstances The `testMultipleSimultaneousInstances` test uses a SimpleAsyncTaskExecutor. This means when a job is submitted, a new thread will be created to run the job. However, there could be a small time interval between JobOperator.startNextInstance(job) and JobOperator.findRunningExecutions(job) where the job execution is created but not started yet. When this happens, the test fails as `findRunningExecutions` does not return the just created (but not started yet) execution. This commit adds a `Thread.sleep` between these two invocations in order to give a chance to the background thread (to be created and) to execute the job.

view details

Sean Sullivan

commit sha ab959717f3454f2f59ac816b15bc21a8b411b8d6

Upgrade jackson to version 2.9.8

view details

Sean Sullivan

commit sha 951bfc970bfdc45688370250f4b34cd26f5270b0

Upgrade activemq to version 5.15.8

view details

Mahmoud Ben Hassine

commit sha 1bfabeb33c53c1ddcb3098e585260150330d5cfe

Fix tests failing on windows The command `ping 1.1.1.1 -n 1 -w 5000` sends only one packet to the remote address and might finish before the configured timeout of 10ms which makes some tests to fail. Moreover, pinging 1.1.1.1 requires the host (which can be the CI build server) to have internet connection. This command can also fail if there is no internet connection which makes some tests (expecting the command to succeed) to fail too. This commit uses the command `ping 127.0.0.1` which does not require an internet connection and which will, by default [1], send 4 packets and wait for a timeout of 4 seconds for each request. This should take more time than the configured timeout of 10ms. Resolves BATCH-2722 [1]: https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/ping

view details

Mahmoud Ben Hassine

commit sha c9f72cd2c1112696365f8023e46db50ff1f83c5b

Minor polish

view details

Mahmoud Ben Hassine

commit sha eba7265a95dc3b264ae901f778f47dc890157af6

Fix tests failing randomly Tests modified in this change set fail randomly due to shared mutable state between multiple threads. This commit ensures that the shared state is correctly synchronised between threads or re-initialized before each test run.

view details

Mahmoud Ben Hassine

commit sha 21c14fc281925827b0269f3e7657a63a38b3967e

Upgrade dependencies

view details

Mahmoud Ben Hassine

commit sha 42596920e64288717f6f24f90647dce3c9fe364a

Remove Travis CI build descriptor Spring Batch CI build runs on Bamboo at Pivotal. There is no need for a third party service to build the project.

view details

Mahmoud Ben Hassine

commit sha 197f32eb210f89c7d41e88a555192f55658c888a

Fix gradle warnings This commit fixes the following warnings: ``` $ ./gradlew clean --warning-mode all > Configure project : The Task.leftShift(Closure) method has been deprecated. This is scheduled to be removed in Gradle 5.0. Please use Task.doLast(Action) instead. at build_1u4zf89udullsb7m3yccjzfxt$_run_closure3.doCall(/spring-batch/build.gradle:187) (Run with --stacktrace to get the full stack trace of this deprecation warning.) Creating a custom task named 'wrapper' has been deprecated. This is scheduled to be removed in Gradle 5.0. You can configure the existing task using the 'wrapper { }' syntax or create your custom task under a different name. at build_1u4zf89udullsb7m3yccjzfxt.run(/spring-batch/build.gradle:921) (Run with --stacktrace to get the full stack trace of this deprecation warning.) BUILD SUCCESSFUL in 3s 10 actionable tasks: 1 executed, 9 up-to-date ``` It also fixes the following warning about annotation processing: ``` Detecting annotation processors on the compile classpath has been deprecated. Gradle 5.0 will ignore annotation processors on the compile classpath. If you did not intend to use annotation processors, you can use the '-proc:none' compiler argument to ignore them. ``` References: * Spring Boot issue #6421 * https://discuss.gradle.org/t/regarding-the-annotation-processors-on-compile-classpath-warning-in-gradle-4-6

view details

Mahmoud Ben Hassine

commit sha 0ec945614b3392b55b37db63b5039fdb681075e9

Upgrade gradle to version 4.10.3

view details

Mahmoud Ben Hassine

commit sha 3ca428471518bd2e6f3ce85917c3379e48b290a3

Upgrade SF to version 5.2.0.BUILD-SNAPSHOT and remove Castor tests Castor support will be removed in SF v5.2. This commit removes CastorMarshallingTests and CastorUnmarshallingTests as they do not compile with SF v5.2.0.BUILD-SNAPSHOT. Resolves BATCH-2787

view details

Mahmoud Ben Hassine

commit sha 8ae64e8e0a00ac9dd4560535cb34e166cabb10c7

Remove `MaxPermSize` option from build as it was removed in Java 8 This commit fixes the warning: ``` Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0 ```

view details

Mahmoud Ben Hassine

commit sha 177856ffd754c0b01bd01a44cd1315ddc4e2413f

Remove unused dependencies

view details

Mahmoud Ben Hassine

commit sha fbe345630b9ca0b22bcb4e1f5626adf13cba5a1a

Fix build warnings

view details

Mahmoud Ben Hassine

commit sha 7e9cd07cbbff8e7a2321d1c56e2c7ae8aae38f8e

Fix Javadoc errors

view details

push time in 2 months

push eventBuzzardo/spring-batch

Mahmoud Ben Hassine

commit sha 433a890e9b6f913c1acfe075b948a2f92e32b04a

Update dependencies

view details

Mahmoud Ben Hassine

commit sha ff5578851fb33dedd57c5ccf5dd71e430040570d

Upgrade Spring Data MongoDB to version 3.0.0.BUILD-SNAPSHOT

view details

Michael Minella

commit sha 8e8f9c8101b52b64050d8e92363de2d28319e3b0

Added comparitor for state transitions when using java config Spring Batch orders the transitions as it goes from state to state based on specificity. The XML configuration has always had this functionality. However, when creating the JSR-352 implementation, the mechanism for which this occured was refactored. That occured at about the same time as the java builders were introduced. Because of this crossing of paths, the java configuration option for defining jobs has never correctly sorted the transitions. This PR applys the sorting algorithm to the java configuration, making XML and java configuration behave the same. Resolves #3638

view details

Yanming Zhou

commit sha 6103fe7306a8f7ed632e913de68ea08ce2716a6d

Update schema-appendix.adoc to fix wrong names BATCH_EXECUTION_CONTEXT is obsoleted

view details

BenjaminHetzJelli

commit sha 7f01bf78cfce89e446f57d7f0410ccb14be4646b

Trim Keywords Followed By Whitespace Other Than The Character ' ' Modify `removeKeyWord(...)` such that the keyword is removed regardless of what kind of whitespace follows. This is especially useful for those who read in SQL from a file which has been formatted such that keywords live on their own lines. Added unit tests for trimming whitespace. Resolves #765

view details

Michael Minella

commit sha b83c81e680e38f6504c1dd8600724e2d221a6acb

Updated email addresses from Pivotal to VMware

view details

Michael Minella

commit sha d41cd45f901c2699f3d0f5e6b7fe1931787259ff

Updated step.adoc to correctly configure a split

view details

Mahmoud Ben Hassine

commit sha ad3dcbc96afcef6e6d7dce75c5c208e540adf789

Remove Gitter link from README.md

view details

Chris Schaefer

commit sha d6f90a1d214e17fc6a1f62251d1a90aa3e72b07b

BATCH-2270: Allow ScriptEvaluator to be injectable in the ScriptItemProcessor

view details

Mahmoud Ben Hassine

commit sha 6ab141bb44bb86ee7aacd5d604fd37971a7344ee

Fix incorrect description of AggregateItemReader in "Appendix A" This commit fixes incorrect references to `AggregateItemReader#__$$BEGIN_RECORD$$__` and `AggregateItemReader#__$$END_RECORD$$__` by updating the description of the reader with the one in its Javadoc. It also adds a note that the AggregateItemReader is not part of the standard library of readers provided by Spring Batch but only given as a sample. Issue #1793

view details

Mahmoud Ben Hassine

commit sha f94446878ba7e070c95b3b7e53d1f8491f1126c5

Fix line tokenizer validation in FlatFileItemReaderBuilder Resolves #766

view details

Mahmoud Ben Hassine

commit sha 999ef54328cc06153f924509e23b095384959c02

Add extra check on connection state in AbstractCursorItemReader#doClose Issue #868

view details

Mahmoud Ben Hassine

commit sha 4e07b33a9dbb4d8b50cedcec64ae15a840a2ee46

Add templates for issues and pull requests

view details

jinwook han

commit sha 0ecc0521908a9892e4e18e09cee20dfd20580871

fix documentation of JobExecutionNotRunningException execution -> checked exception

view details

Sanghyuk Jung

commit sha b9fa9c37867829762e61e499c1d16a20b1c9e9c9

Fix constructor of JsonItemReader to call setExecutionContextName() Resolves #3681

view details

Mahmoud Ben Hassine

commit sha d933e4d7df62cb07badcb8263f96d9002300a1b1

Upgrade Gradle to v6.3 to support Java 14 This commit also upgrades Groovy version to v2.5.10 which is required to build correctly against Java 14 Resolves #3685

view details

Mahmoud Ben Hassine

commit sha 6cca32fde0542f7e16c68a051604e63c8bb4cda5

Fix metrics collection in FaultTolerantChunkProcessor Before this commit, metrics were not collected in a fault-tolerant step. This commit updates the FaultTolerantChunkProcessor to collect metrics. For the record, chunk scanning is not covered for two reasons: 1. When scanning a chunk, there is a single item in each write operation, so it would be incorrect to report a metric called "chunk.write" for a single item. We could argue that it is a singleton chunk, but still.. If we want to time scanned (aka individual) items, we need a more fine grained timer called "scanned.item.write" for example. 2. The end result can be confusing and might distort the overall metrics view in case of errors (because of the noisy metrics of additional transactions for individual items). As a reminder, the goal of the "chunk.write" metric is to give an overview of the write operation time of the whole chunk and not to time each item individually (this could be done using an `ItemWriteListener` if needed). Resolves #3664

view details

Jay Bryant

commit sha a7450d51087f4838c0f5121596e4d0b1a4222a5f

Updated spring-doc-resources version Gives us the new look and feel and more readable code listings.

view details

Santiago Molano

commit sha fb21b30d7d68306b5ab400694502ccd7b2eb8269

Fixed FlatFileItemReaderBuilder LineTokenizer validation Fixed validation for the FlatFileItemReaderBuilder where no LineTokenizer had been provided. Resolves: #3688

view details

Mahmoud Ben Hassine

commit sha 78fedb7d41e4701bfdbc8e79cd5d3d58dde3bbd9

Fix typo in Javadoc of KafkaItemReader

view details

push time in 2 months

PR opened SteeltoeOSS/Documentation

First commit for messaging docs

Created a messaging directory under docs and copied in content from the Spring Boot and Spring AMQP projects, converting from Asciidoc to markdown along the way.
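A conversion like that can be scripted. Here is a minimal sketch (the file names are hypothetical, and the exact tooling used for this PR isn't stated); it goes through DocBook because pandoc does not read AsciiDoc directly:

```bash
# Hypothetical sketch of an AsciiDoc-to-Markdown conversion:
# first render the AsciiDoc source to DocBook XML, then let pandoc
# translate the DocBook into GitHub-flavored Markdown.
asciidoctor -b docbook -o messaging.xml messaging.adoc
pandoc -f docbook -t gfm messaging.xml -o messaging.md
```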

+5604 -0

0 comment

7 changed files

pr created time in 2 months

create branchBuzzardo/Documentation

branch : messaging-docs

created branch time in 2 months

fork Buzzardo/Documentation

The docs used for the Steeltoe Website

fork in 2 months

issue commentspring-io/spring-doc-resources

When we use code wrapping in a heading, it loses its heading meaning and presents as plain code with small text

When we mark code in a heading, it needs to be clear that it's both code and a heading. I think the solution would be to lose the box and make it bold but keep the code font.

What do you think, @oodamien?

artembilan

comment created time in 2 months

issue commentspring-projects/spring-ldap

javacc not found

Thanks, Rob.

Buzzardo

comment created time in 2 months

PR merged spring-guides/tut-spring-security-and-angular-js

Bump jquery from 3.3.1 to 3.5.0 in /basic dependencies

Bumps jquery from 3.3.1 to 3.5.0.

Commits: https://github.com/jquery/jquery/compare/3.3.1...3.5.0

Maintainer changes: This version was pushed to npm by mgol (https://www.npmjs.com/~mgol), a new releaser for jquery since your current version.

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
  • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
  • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
  • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the Security Alerts page.


+15 -7

0 comment

2 changed files

dependabot[bot]

pr closed time in 2 months

push eventspring-guides/tut-spring-security-and-angular-js

dependabot[bot]

commit sha f530f5273eb31b07c9108b06738f4ebaf59c0689

Bump jquery from 3.3.1 to 3.5.0 in /basic Bumps [jquery](https://github.com/jquery/jquery) from 3.3.1 to 3.5.0. - [Release notes](https://github.com/jquery/jquery/releases) - [Commits](https://github.com/jquery/jquery/compare/3.3.1...3.5.0) Signed-off-by: dependabot[bot] <support@github.com>

view details

push time in 2 months

push eventspring-guides/tut-spring-security-and-angular-js

dependabot[bot]

commit sha 3e1ca718a1a89cf1864a96d929c764fe224fe783

Bump jquery from 3.2.1 to 3.5.0 in /oauth2-logout/ui Bumps [jquery](https://github.com/jquery/jquery) from 3.2.1 to 3.5.0. - [Release notes](https://github.com/jquery/jquery/releases) - [Commits](https://github.com/jquery/jquery/compare/3.2.1...3.5.0) Signed-off-by: dependabot[bot] <support@github.com>

view details

push time in 2 months

PR merged spring-guides/tut-spring-security-and-angular-js

Bump jquery from 3.2.1 to 3.5.0 in /oauth2-logout/ui dependencies

Bumps jquery from 3.2.1 to 3.5.0.

Commits: https://github.com/jquery/jquery/compare/3.2.1...3.5.0

Maintainer changes: This version was pushed to npm by mgol (https://www.npmjs.com/~mgol), a new releaser for jquery since your current version.

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.



+18 -8

0 comment

2 changed files

dependabot[bot]

pr closed time in 2 months

push eventBuzzardo/spring-session

Eleftheria Stein

commit sha 997ff56c6317ee7a75ca9a7e36af9458b0d29af3

Update gitignore

view details

Eleftheria Stein

commit sha 29af9d3a4d7e9db750189c47bfa1224df50a8fa9

WebFlux custom cookie sample Resolves gh-1620

view details

Eleftheria Stein

commit sha 5375f51bca58f4db69483b4d67c8be2cac0cd3fc

Fix broken links in guides Resolves gh-1621

view details

Eleftheria Stein

commit sha 49375a28fae80edad162f7185bdb73dff771453f

Add guide for customizing cookie in WebFlux Resolves gh-1614

view details

push time in 2 months

issue openedspring-projects/spring-ldap

The Gradle distribution tasks don't work

The docsZip and distZip tasks break with the following error:

Could not read /Users/j/projects/spring-ldap/core/build/javacc/javacc-5.0.tar.gz.

Not in GZIP format
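A quick way to check what was actually downloaded (a diagnostic sketch only; the path comes from the error message above):

```bash
# If the file is a real archive, `file` should report "gzip compressed data".
# An HTML error page from a failed download would explain "Not in GZIP format".
file core/build/javacc/javacc-5.0.tar.gz
head -c 200 core/build/javacc/javacc-5.0.tar.gz
```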

created time in 2 months

pull request commentspring-projects/spring-kafka

Upgrade Asciidoctor

The asciidoctor task is creating a bunch of false positives for invalid references. Fortunately, the PDF generator generates correct invalid reference messages, so I was able to find and fix two invalid references (both in the same file).

I have filed an issue with the asciidoctor-gradle-plugin project: https://github.com/asciidoctor/asciidoctor-gradle-plugin/issues/550

Buzzardo

comment created time in 2 months

issue openedasciidoctor/asciidoctor-gradle-plugin

False positives for "possible invalid reference"

When I run the asciidoctor task, I get a bunch of false positives, as follows:

uri:classloader:/gems/asciidoctor-2.0.10/lib/asciidoctor/converter/html5.rb convert_document INFO: possible invalid reference: junit

junit is a valid reference, defined as [[junit]] in an included file. The link in the finished document works.
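For illustration, a check along these lines (the paths are hypothetical) can confirm that the anchor exists in the sources and whether the warning is specific to the Gradle plugin:

```bash
# Hypothetical paths: verify the anchor definition in the included files,
# then render the same sources with the standalone Asciidoctor CLI to see
# whether the "possible invalid reference" warning still appears.
grep -rn '\[\[junit\]\]' src/reference/asciidoc/
asciidoctor src/reference/asciidoc/index.adoc -o build/index.html
```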

I noticed that, when I left id 'org.asciidoctor.jvm.gems' version '3.1.0' out of the plugins section of the build file, the PDF converter didn't provide any invalid reference warnings. When I do include it, the PDF converter correctly identifies invalid references and does not create the false positives.

I thought you might want to investigate the differences between the reference checker in 'org.asciidoctor.jvm.gems' version '3.1.0' and the one in 'org.asciidoctor.jvm.convert' version '3.1.0'.

You can find the build file in this commit: https://github.com/spring-projects/spring-kafka/pull/1469/commits/2485c65a32ec71ebe3e7a67b85de67d662e26583

created time in 2 months

PR opened spring-projects/spring-kafka

Upgrade Asciidoctor

Upgrade Asciidoctor to the latest version, to eliminate technical debt.

I also fixed an invalid reference.

+30 -18

0 comment

2 changed files

pr created time in 2 months

create branchBuzzardo/spring-kafka

branch : upgrade-asciidoctor

created branch time in 2 months

issue commentspring-guides/tut-react-and-spring-data-rest

Can not run the project on Spring Source Tool Suite.

Do you have Cygwin? If so, does it run correctly if you try it on Cygwin?

vosybac

comment created time in 2 months

PR opened spring-cloud/spring-cloud-app-broker

Upgrade Asciidoctor

Upgrade Asciidoctor to the latest version (3.1.0), to reduce technical debt.

+32 -24

0 comment

2 changed files

pr created time in 2 months

create branchBuzzardo/spring-cloud-app-broker

branch : upgrade-asciidoctor

created branch time in 2 months

PR opened spring-projects/spring-batch

Upgrade versions of Asciidoctor

Upgrade the versions of Asciidoctor from 1.5/1.6 to 3.1.0, to reduce technical debt.

Thank you for taking time to contribute this pull request! You might have already read the contributor guide, but as a reminder, please make sure to:

  • Sign the contributor license agreement
  • Rebase your changes on the latest master branch and squash your commits
  • Add/Update unit tests as needed
  • Run a build and make sure all tests pass prior to submission

For more details, please check the contributor guide. Thank you upfront!

+24 -24

0 comment

1 changed file

pr created time in 2 months
