= azure_table_storage
:type: output
:status: beta
:categories: ["Services","Azure"]

////
    THIS FILE IS AUTOGENERATED!

    To make changes, edit the corresponding source file under:

    https://github.com/redpanda-data/connect/tree/main/internal/impl/.

    And:

    https://github.com/redpanda-data/connect/tree/main/cmd/tools/docs_gen/templates/plugin.adoc.tmpl.
////

// © 2024 Redpanda Data Inc.

component_type_dropdown::[]

Stores messages in an Azure Table Storage table.

Introduced in version 3.36.0.

[tabs]
======
Common::
+
--
```yml
# Common config fields, showing default values
output:
  label: ""
  azure_table_storage:
    storage_account: ""
    storage_access_key: ""
    storage_connection_string: ""
    storage_sas_token: ""
    table_name: ${! meta("kafka_topic") } # No default (required)
    partition_key: ""
    row_key: ""
    properties: {}
    max_in_flight: 64
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""
```
--
Advanced::
+
--
```yml
# All config fields, showing default values
output:
  label: ""
  azure_table_storage:
    storage_account: ""
    storage_access_key: ""
    storage_connection_string: ""
    storage_sas_token: ""
    table_name: ${! meta("kafka_topic") } # No default (required)
    partition_key: ""
    row_key: ""
    properties: {}
    transaction_type: INSERT
    max_in_flight: 64
    timeout: 5s
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""
      processors: [] # No default (optional)
```
--
======

Only one authentication method is required: either `storage_connection_string`, or both `storage_account` and `storage_access_key`. If both are set, `storage_connection_string` takes priority.

To set the `table_name`, `partition_key` and `row_key` you can use the function interpolations described xref:configuration:interpolation.adoc#bloblang-queries[here], which are calculated per message of a batch.

If `properties` is not set in the config, all JSON fields are marshalled and stored in the table, which is created if it does not exist. The `object` and `array` fields are marshalled as strings.
For example, the JSON message:

```json
{
  "foo": 55,
  "bar": {
    "baz": "a",
    "bez": "b"
  },
  "diz": ["a", "b"]
}
```

will be stored in the table with the following properties:

```yml
foo: '55'
bar: '{ "baz": "a", "bez": "b" }'
diz: '["a", "b"]'
```

It's also possible to use function interpolations to get or transform the property values, for example:

```yml
properties:
  device: '${! json("device") }'
  timestamp: '${! json("timestamp") }'
```

== Performance

This output benefits from sending multiple messages in flight in parallel for improved performance. You can tune the maximum number of in flight messages (or message batches) with the field `max_in_flight`.

This output benefits from sending messages as a batch for improved performance. Batches can be formed at both the input and output level. You can find out more xref:configuration:batching.adoc[in this doc].

== Fields

=== `storage_account`

The storage account to access. This field is ignored if `storage_connection_string` is set.

*Type*: `string`

*Default*: `""`

=== `storage_access_key`

The storage account access key. This field is ignored if `storage_connection_string` is set.

*Type*: `string`

*Default*: `""`

=== `storage_connection_string`

A storage account connection string. This field is required if `storage_account` and `storage_access_key` / `storage_sas_token` are not set.

*Type*: `string`

*Default*: `""`

=== `storage_sas_token`

The storage account SAS token. This field is ignored if `storage_connection_string` or `storage_access_key` is set.

*Type*: `string`

*Default*: `""`

=== `table_name`

The table to store messages into.
This field supports xref:configuration:interpolation.adoc#bloblang-queries[interpolation functions].

*Type*: `string`

```yml
# Examples

table_name: ${! meta("kafka_topic") }

table_name: ${! json("table") }
```

=== `partition_key`

The partition key.
This field supports xref:configuration:interpolation.adoc#bloblang-queries[interpolation functions].
*Type*: `string`

*Default*: `""`

```yml
# Examples

partition_key: ${! json("date") }
```

=== `row_key`

The row key.
This field supports xref:configuration:interpolation.adoc#bloblang-queries[interpolation functions].

*Type*: `string`

*Default*: `""`

```yml
# Examples

row_key: ${! json("device") }-${! uuid_v4() }
```

=== `properties`

A map of properties to store into the table.
This field supports xref:configuration:interpolation.adoc#bloblang-queries[interpolation functions].

*Type*: `object`

*Default*: `{}`

=== `transaction_type`

The type of transaction operation.
This field supports xref:configuration:interpolation.adoc#bloblang-queries[interpolation functions].

*Type*: `string`

*Default*: `"INSERT"`

Options:
`INSERT`, `INSERT_MERGE`, `INSERT_REPLACE`, `UPDATE_MERGE`, `UPDATE_REPLACE`, `DELETE`.

```yml
# Examples

transaction_type: ${! json("operation") }

transaction_type: ${! meta("operation") }

transaction_type: INSERT
```

=== `max_in_flight`

The maximum number of parallel message batches to have in flight at any given time.

*Type*: `int`

*Default*: `64`

=== `timeout`

The maximum period to wait on an upload before abandoning it and reattempting.

*Type*: `string`

*Default*: `"5s"`

=== `batching`

Allows you to configure a xref:configuration:batching.adoc[batching policy].

*Type*: `object`

```yml
# Examples

batching:
  byte_size: 5000
  count: 0
  period: 1s

batching:
  count: 10
  period: 1s

batching:
  check: this.contains("END BATCH")
  count: 0
  period: 1m
```

=== `batching.count`

The number of messages after which the batch is flushed. A value of `0` disables count-based batching.

*Type*: `int`

*Default*: `0`

=== `batching.byte_size`

The number of bytes after which the batch is flushed. A value of `0` disables size-based batching.

*Type*: `int`

*Default*: `0`

=== `batching.period`

The period after which an incomplete batch is flushed regardless of its size.
*Type*: `string`

*Default*: `""`

```yml
# Examples

period: 1s

period: 1m

period: 500ms
```

=== `batching.check`

A xref:guides:bloblang/about.adoc[Bloblang query] that should return a boolean value indicating whether a message should end a batch.

*Type*: `string`

*Default*: `""`

```yml
# Examples

check: this.type == "end_of_transaction"
```

=== `batching.processors`

A list of xref:components:processors/about.adoc[processors] to apply to a batch as it is flushed. This allows you to aggregate and archive the batch however you see fit. Please note that all resulting messages are flushed as a single batch, therefore splitting the batch into smaller batches using these processors is a no-op.

*Type*: `array`

```yml
# Examples

processors:
  - archive:
      format: concatenate

processors:
  - archive:
      format: lines

processors:
  - archive:
      format: json_array
```
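Putting the fields above together, the following is a minimal sketch of a complete configuration that writes Kafka messages to a table named after the topic. The connection string, label, and property names are hypothetical placeholders, not defaults; substitute your own account credentials and message fields:

```yml
output:
  label: "device_events"
  azure_table_storage:
    # Hypothetical connection string; when set, it takes priority over
    # storage_account / storage_access_key.
    storage_connection_string: "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<key>;EndpointSuffix=core.windows.net"
    table_name: ${! meta("kafka_topic") }
    partition_key: ${! json("date") }
    row_key: ${! json("device") }-${! uuid_v4() }
    # Only these two properties are stored; omit this map to store all JSON fields.
    properties:
      device: '${! json("device") }'
      timestamp: '${! json("timestamp") }'
    max_in_flight: 64
    batching:
      count: 10
      period: 1s
```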