= aws_kinesis
:type: output
:status: stable
:categories: ["Services","AWS"]

////
THIS FILE IS AUTOGENERATED!

To make changes, edit the corresponding source file under:

https://github.com/redpanda-data/connect/tree/main/internal/impl/.

And:

https://github.com/redpanda-data/connect/tree/main/cmd/tools/docs_gen/templates/plugin.adoc.tmpl
////

// © 2024 Redpanda Data Inc.

component_type_dropdown::[]

Sends messages to a Kinesis stream.

Introduced in version 3.36.0.

[tabs]
======
Common::
+
--

```yml
# Common config fields, showing default values
output:
  label: ""
  aws_kinesis:
    stream: foo # No default (required)
    partition_key: "" # No default (required)
    max_in_flight: 64
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""
```

--
Advanced::
+
--

```yml
# All config fields, showing default values
output:
  label: ""
  aws_kinesis:
    stream: foo # No default (required)
    partition_key: "" # No default (required)
    hash_key: "" # No default (optional)
    max_in_flight: 64
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""
      processors: [] # No default (optional)
    region: ""
    endpoint: ""
    credentials:
      profile: ""
      id: ""
      secret: ""
      token: ""
      from_ec2_role: false
      role: ""
      role_external_id: ""
    max_retries: 0
    backoff:
      initial_interval: 1s
      max_interval: 5s
      max_elapsed_time: 30s
```

--
======

Both the `partition_key` (required) and `hash_key` (optional) fields can be set dynamically using function interpolations described xref:configuration:interpolation.adoc#bloblang-queries[here]. When sending batched messages, the interpolations are performed per message part.
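For example, the partition key could be derived from a field of each message. A minimal sketch, assuming each message is a JSON document carrying a `user_id` field (the field name is purely illustrative):

```yml
output:
  aws_kinesis:
    stream: foo
    # Assumes the payload is JSON with a user_id field; messages sharing a
    # user_id are routed to the same shard.
    partition_key: ${! json("user_id") }
```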
== Credentials

By default, Benthos will use a shared credentials file when connecting to AWS services. It's also possible to set them explicitly at the component level, allowing you to transfer data across accounts. You can find out more in xref:guides:cloud/aws.adoc[].

== Performance

This output benefits from sending multiple messages in flight in parallel for improved performance. You can tune the maximum number of in-flight messages (or message batches) with the field `max_in_flight`.

This output benefits from sending messages as a batch for improved performance. Batches can be formed at both the input and output level. You can find out more xref:configuration:batching.adoc[in this doc].

== Fields

=== `stream`

The stream to publish messages to. Streams can either be specified by their name or full ARN.

*Type*: `string`

```yml
# Examples

stream: foo

stream: arn:aws:kinesis:*:111122223333:stream/my-stream
```

=== `partition_key`

A required key for partitioning messages.
This field supports xref:configuration:interpolation.adoc#bloblang-queries[interpolation functions].

*Type*: `string`

=== `hash_key`

An optional hash key for partitioning messages.
This field supports xref:configuration:interpolation.adoc#bloblang-queries[interpolation functions].

*Type*: `string`

=== `max_in_flight`

The maximum number of parallel message batches to have in flight at any given time.

*Type*: `int`

*Default*: `64`

=== `batching`

Allows you to configure a xref:configuration:batching.adoc[batching policy].

*Type*: `object`

```yml
# Examples

batching:
  byte_size: 5000
  count: 0
  period: 1s

batching:
  count: 10
  period: 1s

batching:
  check: this.contains("END BATCH")
  count: 0
  period: 1m
```

=== `batching.count`

The number of messages at which the batch should be flushed. If `0`, count-based batching is disabled.

*Type*: `int`

*Default*: `0`

=== `batching.byte_size`

The number of bytes at which the batch should be flushed. If `0`, size-based batching is disabled.

*Type*: `int`

*Default*: `0`

=== `batching.period`

A period in which an incomplete batch should be flushed regardless of its size.

*Type*: `string`

*Default*: `""`

```yml
# Examples

period: 1s

period: 1m

period: 500ms
```

=== `batching.check`

A xref:guides:bloblang/about.adoc[Bloblang query] that should return a boolean value indicating whether a message should end a batch.

*Type*: `string`

*Default*: `""`

```yml
# Examples

check: this.type == "end_of_transaction"
```

=== `batching.processors`

A list of xref:components:processors/about.adoc[processors] to apply to a batch as it is flushed. This allows you to aggregate and archive the batch however you see fit. Please note that all resulting messages are flushed as a single batch, so splitting the batch into smaller batches using these processors is a no-op.

*Type*: `array`

```yml
# Examples

processors:
  - archive:
      format: concatenate

processors:
  - archive:
      format: lines

processors:
  - archive:
      format: json_array
```

=== `region`

The AWS region to target.

*Type*: `string`

*Default*: `""`

=== `endpoint`

Allows you to specify a custom endpoint for the AWS API.

*Type*: `string`

*Default*: `""`

=== `credentials`

Optional manual configuration of AWS credentials to use. More information can be found in xref:guides:cloud/aws.adoc[].

*Type*: `object`

=== `credentials.profile`

A profile from `~/.aws/credentials` to use.

*Type*: `string`

*Default*: `""`

=== `credentials.id`

The ID of credentials to use.

*Type*: `string`

*Default*: `""`

=== `credentials.secret`

The secret for the credentials being used.

[CAUTION]
====
This field contains sensitive information that usually shouldn't be added to a config directly; read our xref:configuration:secrets.adoc[secrets page for more info].
====

*Type*: `string`

*Default*: `""`

=== `credentials.token`

The token for the credentials being used, required when using short term credentials.

*Type*: `string`

*Default*: `""`

=== `credentials.from_ec2_role`

Use the credentials of a host EC2 machine configured to assume https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html[an IAM role associated with the instance^].

*Type*: `bool`

*Default*: `false`

Requires version 4.2.0 or newer

=== `credentials.role`

A role ARN to assume.

*Type*: `string`

*Default*: `""`

=== `credentials.role_external_id`

An external ID to provide when assuming a role.

*Type*: `string`

*Default*: `""`

=== `max_retries`

The maximum number of retries before giving up on the request. If set to zero there is no discrete limit.

*Type*: `int`

*Default*: `0`

=== `backoff`

Control time intervals between retry attempts.

*Type*: `object`

=== `backoff.initial_interval`

The initial period to wait between retry attempts.

*Type*: `string`

*Default*: `"1s"`

=== `backoff.max_interval`

The maximum period to wait between retry attempts.

*Type*: `string`

*Default*: `"5s"`

=== `backoff.max_elapsed_time`

The maximum period to wait before retry attempts are abandoned. If zero then no limit is used.

*Type*: `string`

*Default*: `"30s"`
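As a rough illustration of the retry fields above, a pipeline that prefers failing fast over blocking could bound its retries like this (the stream name and the timing values are arbitrary placeholders, not recommendations):

```yml
output:
  aws_kinesis:
    stream: foo
    partition_key: ${! uuid_v4() } # random key spreads records across shards
    max_retries: 3                 # give up after three attempts
    backoff:
      initial_interval: 500ms
      max_interval: 3s
      max_elapsed_time: 10s        # abandon retries after ten seconds in total
```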