Configuration
Wrangler optionally uses a configuration file to customize the development and deployment setup for a Worker.
It is best practice to treat Wrangler's configuration file as the source of truth for configuring a Worker.
{ "name": "my-worker", "main": "src/index.js", "compatibility_date": "2022-07-12", "workers_dev": false, "route": { "pattern": "example.org/*", "zone_name": "example.org" }, "kv_namespaces": [ { "binding": "<MY_NAMESPACE>", "id": "<KV_ID>" } ], "env": { "staging": { "name": "my-worker-staging", "route": { "pattern": "staging.example.org/*", "zone_name": "example.org" }, "kv_namespaces": [ { "binding": "<MY_NAMESPACE>", "id": "<STAGING_KV_ID>" } ] } }}# Top-level configurationname = "my-worker"main = "src/index.js"compatibility_date = "2022-07-12"
workers_dev = falseroute = { pattern = "example.org/*", zone_name = "example.org" }
kv_namespaces = [ { binding = "<MY_NAMESPACE>", id = "<KV_ID>" }]
[env.staging]name = "my-worker-staging"route = { pattern = "staging.example.org/*", zone_name = "example.org" }
kv_namespaces = [ { binding = "<MY_NAMESPACE>", id = "<STAGING_KV_ID>" }]You can define different configurations for a Worker using Wrangler environments. There is a default (top-level) environment and you can create named environments that provide environment-specific configuration.
Named environments are defined under `[env.<name>]` keys, such as `[env.staging]`, which you can then preview or deploy with the `-e` / `--env` flag in Wrangler commands, for example `npx wrangler deploy --env staging`.
Most keys are inheritable, meaning that top-level configuration can be used in environments. Bindings, such as `vars` or `kv_namespaces`, are not inheritable and must be defined explicitly for each environment. In addition, a few keys can only appear at the top level.
Top-level keys apply to the Worker as a whole (and therefore all environments). They cannot be defined within named environments.
- `keep_vars` (boolean, optional): Whether Wrangler should keep variables configured in the dashboard on deploy. Refer to source of truth.
- `migrations` (object[], optional): When making changes to your Durable Object classes, you must perform a migration. Refer to Durable Object migrations.
- `send_metrics` (boolean, optional): Whether Wrangler should send usage data to Cloudflare for this project. Defaults to `true`. You can learn more about this in our data policy.
- `site` (object, optional, deprecated): Refer to the Workers Sites section below for more information. Cloudflare Pages and Workers Assets are preferred over this approach. This is not supported by the Cloudflare Vite plugin.
Inheritable keys are configurable at the top-level, and can be inherited (or overridden) by environment-specific configuration.
- `name` (string, required): The name of your Worker. Alphanumeric characters (`a`, `b`, `c`, etc.) and dashes (`-`) only. Do not use underscores (`_`).
- `main` (string, required): The path to the entrypoint of your Worker that will be executed. For example: `./src/index.ts`.
- `compatibility_date` (string, required): A date in the form `yyyy-mm-dd`, which will be used to determine which version of the Workers runtime is used. Refer to Compatibility dates.
- `account_id` (string, optional): The ID of the account associated with your zone. You might have more than one account, so make sure to use the ID of the account associated with the zone/route you provide, if you provide one. It can also be specified through the `CLOUDFLARE_ACCOUNT_ID` environment variable.
- `compatibility_flags` (string[], optional): A list of flags that enable upcoming features of the Workers runtime, usually used together with `compatibility_date`. Refer to compatibility dates.
- `workers_dev` (boolean, optional): Enables use of the `*.workers.dev` subdomain to deploy your Worker. If you have a Worker that is only for `scheduled` events, you can set this to `false`. Defaults to `true`. Refer to types of routes.
- `preview_urls` (boolean, optional): Enables use of Preview URLs to test your Worker. Defaults to `true`. Refer to Preview URLs.
- `route` (Route, optional): A route that your Worker should be deployed to. Only one of `routes` or `route` is required. Refer to types of routes.
- `routes` (Route[], optional): An array of routes that your Worker should be deployed to. Only one of `routes` or `route` is required. Refer to types of routes.
- `tsconfig` (string, optional): Path to a custom `tsconfig`. Not applicable if you're using the Cloudflare Vite plugin.
- `triggers` (object, optional): Cron definitions to trigger a Worker's `scheduled` function. Refer to triggers.
- `rules` (Rule, optional): An ordered list of rules that define which modules to import, and what type to import them as. You will need to specify rules to use `Text`, `Data`, and `CompiledWasm` modules, or when you wish to have a `.js` file be treated as an `ESModule` instead of `CommonJS`. Not applicable if you're using the Cloudflare Vite plugin.
- `build` (Build, optional): Configures a custom build step to be run by Wrangler when building your Worker. Refer to Custom builds. Not applicable if you're using the Cloudflare Vite plugin.
- `no_bundle` (boolean, optional): Skip internal build steps and directly deploy your Worker script. You must have a plain JavaScript Worker with no dependencies. Not applicable if you're using the Cloudflare Vite plugin.
- `find_additional_modules` (boolean, optional): If `true`, Wrangler will traverse the file tree below `base_dir`. Any files that match `rules` will be included in the deployed Worker. Defaults to `true` if `no_bundle` is `true`, otherwise `false`. Can only be used with Module format Workers (not Service Worker format). Not applicable if you're using the Cloudflare Vite plugin.
- `base_dir` (string, optional): The directory in which module "rules" should be evaluated when including additional files (via `find_additional_modules`) into a Worker deployment. Defaults to the directory containing the `main` entry point of the Worker if not specified. Not applicable if you're using the Cloudflare Vite plugin.
- `preserve_file_names` (boolean, optional): Determines whether Wrangler will preserve the file names of additional modules bundled with the Worker. The default is to prepend filenames with a content hash, for example, `34de60b44167af5c5a709e62a4e20c4f18c9e3b6-favicon.ico`. Not applicable if you're using the Cloudflare Vite plugin.
- `minify` (boolean, optional): Minify the Worker script before uploading. If you're using the Cloudflare Vite plugin, `minify` is replaced by Vite's `build.minify`.
- `keep_names` (boolean, optional): Wrangler uses esbuild to process the Worker code for development and deployment. This option specifies whether esbuild should apply its `keepNames` logic to the code. Defaults to `true`.
- `logpush` (boolean, optional): Enables Workers Trace Events Logpush for a Worker. Any scripts with this property will automatically get picked up by the Workers Logpush job configured for your account. Defaults to `false`. Refer to Workers Logpush.
- `limits` (Limits, optional): Configures limits to be imposed on execution at runtime. Refer to Limits.
- `observability` (object, optional): Configures automatic observability settings for telemetry data emitted from your Worker. Refer to Observability.
- `assets` (Assets, optional): Configures static assets that will be served. Refer to Assets for more details.
- `migrations` (object, optional): Maps a Durable Object from a class name to a runtime state. This communicates changes to the Durable Object (creation / deletion / rename / transfer) to the Workers runtime and provides the runtime with instructions on how to deal with those changes. Refer to Durable Objects migrations.
Non-inheritable keys are configurable at the top-level, but cannot be inherited by environments and must be specified for each environment.
- `define` (Record<string, string>, optional): A map of values to substitute when deploying your Worker. If you're using the Cloudflare Vite plugin, `define` is replaced by Vite's `define`.
- `vars` (object, optional): A map of environment variables to set when deploying your Worker. Refer to Environment variables.
- `durable_objects` (object, optional): A list of Durable Objects that your Worker should be bound to. Refer to Durable Objects.
- `kv_namespaces` (object, optional): A list of KV namespaces that your Worker should be bound to. Refer to KV namespaces.
- `r2_buckets` (object, optional): A list of R2 buckets that your Worker should be bound to. Refer to R2 buckets.
- `vectorize` (object, optional): A list of Vectorize indexes that your Worker should be bound to. Refer to Vectorize indexes.
- `services` (object, optional): A list of service bindings that your Worker should be bound to. Refer to service bindings.
- `tail_consumers` (object, optional): A list of the Tail Workers your Worker sends data to. Refer to Tail Workers.
There are three types of routes: Custom Domains, routes, and workers.dev.
Custom Domains allow you to connect your Worker to a domain or subdomain, without having to make changes to your DNS settings or perform any certificate management.
- `pattern` (string, required): The pattern that your Worker should be run on, for example, `"example.com"`.
- `custom_domain` (boolean, optional): Whether the Worker should be on a Custom Domain as opposed to a route. Defaults to `false`.
Example:
{ "routes": [ { "pattern": "shop.example.com", "custom_domain": true } ]}routes = [ { pattern = "shop.example.com", custom_domain = true }]Routes allow users to map a URL pattern to a Worker. A route can be configured as a zone ID route, a zone name route, or a simple route.
- `pattern` (string, required): The pattern that your Worker can be run on, for example, `"example.com/*"`.
- `zone_id` (string, required): The ID of the zone that your `pattern` is associated with. Refer to Find zone and account IDs.
Example:
{ "routes": [ { "pattern": "subdomain.example.com/*", "zone_id": "<YOUR_ZONE_ID>" } ]}routes = [ { pattern = "subdomain.example.com/*", zone_id = "<YOUR_ZONE_ID>" }]-
patternstring required- The pattern that your Worker should be run on, for example,
"example.com/*".
- The pattern that your Worker should be run on, for example,
-
zone_namestring required- The name of the zone that your
patternis associated with. If you are using API tokens, this will require theAccountscope.
- The name of the zone that your
Example:
{ "routes": [ { "pattern": "subdomain.example.com/*", "zone_name": "example.com" } ]}routes = [ { pattern = "subdomain.example.com/*", zone_name = "example.com" }]This is a simple route that only requires a pattern.
Example:
{ "route": "example.com/*"}route = "example.com/*"Cloudflare Workers accounts come with a workers.dev subdomain that is configurable in the Cloudflare dashboard.
- `workers_dev` (boolean, optional): Whether the Worker runs on the workers.dev account subdomain. Defaults to `true`.

```jsonc
{
  "workers_dev": false
}
```

```toml
workers_dev = false
```

Triggers allow you to define the cron expression to invoke your Worker's `scheduled` function. Refer to Supported cron expressions.
- `crons` (string[], required): An array of `cron` expressions. To disable a Cron Trigger, set `crons = []`. Commenting out the `crons` key will not disable a Cron Trigger.
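A Worker that receives Cron Triggers exports a `scheduled` handler. A minimal sketch (the helper and log message are illustrative; in a real Worker the object is the module's default export):

```javascript
// Minimal Worker shape with a scheduled() handler invoked by Cron Triggers.
const worker = {
  async scheduled(controller, env, ctx) {
    // controller.cron is the expression that fired, e.g. "* * * * *".
    // ctx.waitUntil() keeps the invocation alive until background work settles.
    ctx.waitUntil(logRun(controller.cron));
  },
};

// Hypothetical background step; a real Worker might write to KV, R2, or a queue.
async function logRun(cron) {
  console.log(`cron fired: ${cron}`);
}
```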
Example:
{ "triggers": { "crons": [ "* * * * *" ] }}[triggers]crons = ["* * * * *"]The Observability setting allows you to automatically ingest, store, filter, and analyze logging data emitted from Cloudflare Workers directly from your Cloudflare Worker's dashboard.
- `enabled` (boolean, required): When set to `true` on a Worker, logs for the Worker are persisted. Defaults to `true` for all new Workers.
- `head_sampling_rate` (number, optional): A number between 0 and 1, where 0 indicates zero out of one hundred requests are logged, and 1 indicates every request is logged. If `head_sampling_rate` is unspecified, it is configured to a default value of 1 (100%). Read more about head-based sampling.
Example:
{ "observability": { "enabled": true, "head_sampling_rate": 0.1 }}[observability]enabled = truehead_sampling_rate = 0.1 # 10% of requests are loggedYou can configure a custom build step that will be run before your Worker is deployed. Refer to Custom builds.
- `command` (string, optional): The command used to build your Worker. The command is executed in the `sh` shell on Linux and macOS, and the `cmd` shell on Windows. The `&&` and `||` shell operators may be used.
- `cwd` (string, optional): The directory in which the command is executed.
- `watch_dir` (string | string[], optional): The directory to watch for changes while using `wrangler dev`. Defaults to the current working directory.
Example:
{ "build": { "command": "npm run build", "cwd": "build_cwd", "watch_dir": "build_watch_dir" }}[build]command = "npm run build"cwd = "build_cwd"watch_dir = "build_watch_dir"You can impose limits on your Worker's behavior at runtime. Limits are only supported for the Standard Usage Model. Limits are only enforced when deployed to Cloudflare's network, not in local development. The CPU limit can be set to a maximum of 300,000 milliseconds (5 minutes).
Each isolate has some built-in flexibility to allow for cases where your Worker infrequently runs over the configured limit. If your Worker starts hitting the limit consistently, its execution will be terminated according to the limit configured.
- `cpu_ms` (number, optional): The maximum CPU time allowed per invocation, in milliseconds.
Example:
{ "limits": { "cpu_ms": 100 }}[limits]cpu_ms = 100The Workers Browser Rendering API allows developers to programmatically control and interact with a headless browser instance and create automation flows for their applications and products.
A browser binding will provide your Worker with an authenticated endpoint to interact with a dedicated Chromium browser instance.
- `binding` (string, required): The binding name used to refer to the browser binding. The value (string) you set will be used to reference this headless browser in your Worker. The binding must be a valid JavaScript variable name. For example, `binding = "HEAD_LESS"` or `binding = "simulatedBrowser"` would both be valid names for the binding.
Example:
{ "browser": { "binding": "<BINDING_NAME>" }}[browser]binding = "<BINDING_NAME>"D1 is Cloudflare's serverless SQL database. A Worker can query a D1 database (or databases) by creating a binding to each database for D1 Workers Binding API.
To bind D1 databases to your Worker, assign an array of the below object to the [[d1_databases]] key.
- `binding` (string, required): The binding name used to refer to the D1 database. The value (string) you set will be used to reference this database in your Worker. The binding must be a valid JavaScript variable name. For example, `binding = "MY_DB"` or `binding = "productionDB"` would both be valid names for the binding.
- `database_name` (string, required): The name of the database. This is a human-readable name that allows you to distinguish between different databases, and is set when you first create the database.
- `database_id` (string, required): The ID of the database. The database ID is available when you first use `wrangler d1 create` or when you call `wrangler d1 list`, and uniquely identifies your database.
- `preview_database_id` (string, optional): The preview ID of this D1 database. If provided, `wrangler dev` uses this ID. Otherwise, it uses `database_id`. This option is required when using `wrangler dev --remote`.
- `migrations_dir` (string, optional): The migration directory containing the migration files. By default, `wrangler d1 migrations create` creates a folder named `migrations`. You can use `migrations_dir` to specify a different folder containing the migration files (for example, if you have a mono-repo setup, and want to use a single D1 instance across your apps/packages). For more information, refer to D1 Wrangler `migrations` commands and D1 migrations.
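At runtime, the binding is exposed on `env` and queried through the D1 client API (`prepare()`, `bind()`, `all()`). A sketch, assuming a binding named `MY_DB` and a `users` table (both illustrative):

```javascript
// Run a parameterized query against a D1 database binding.
async function getUser(env, id) {
  // prepare() + bind() builds a parameterized statement; all() executes it
  // and resolves with { results, success, meta }.
  const { results } = await env.MY_DB
    .prepare("SELECT id, name FROM users WHERE id = ?")
    .bind(id)
    .all();
  return results[0] ?? null; // first matching row, or null if none
}
```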
Example:
{ "d1_databases": [ { "binding": "<BINDING_NAME>", "database_name": "<DATABASE_NAME>", "database_id": "<DATABASE_ID>" } ]}[[d1_databases]]binding = "<BINDING_NAME>"database_name = "<DATABASE_NAME>"database_id = "<DATABASE_ID>"Dispatch namespace bindings allow for communication between a dynamic dispatch Worker and a dispatch namespace. Dispatch namespace bindings are used in Workers for Platforms. Workers for Platforms helps you deploy serverless functions programmatically on behalf of your customers.
- `binding` (string, required): The binding name. The value (string) you set will be used to reference this namespace in your Worker. The binding must be a valid JavaScript variable name. For example, `binding = "MY_NAMESPACE"` or `binding = "productionNamespace"` would both be valid names for the binding.
- `namespace` (string, required): The name of the dispatch namespace.
- `outbound` (object, optional):
  - `service` (string, required): The name of the outbound Worker to bind to.
  - `parameters` (array, optional): A list of parameters to pass data from your dynamic dispatch Worker to the outbound Worker.
{ "dispatch_namespaces": [ { "binding": "<BINDING_NAME>", "namespace": "<NAMESPACE_NAME>", "outbound": { "service": "<WORKER_NAME>", "parameters": [ "params_object" ] } } ]}[[dispatch_namespaces]]binding = "<BINDING_NAME>"namespace = "<NAMESPACE_NAME>"outbound = {service = "<WORKER_NAME>", parameters = ["params_object"]}Durable Objects provide low-latency coordination and consistent storage for the Workers platform.
To bind Durable Objects to your Worker, assign an array of the below object to the durable_objects.bindings key.
- `name` (string, required): The name of the binding used to refer to the Durable Object.
- `class_name` (string, required): The exported class name of the Durable Object.
- `script_name` (string, optional): The name of the Worker where the Durable Object is defined, if it is external to this Worker. This option can be used both in local and remote development. In local development, you must run the external Worker in a separate process (via `wrangler dev`). In remote development, the appropriate remote binding must be used.
- `environment` (string, optional): The environment of the `script_name` to bind to.
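At runtime, you reach an instance through the binding's `idFromName()` and `get()` methods and then call `fetch()` on the returned stub. A sketch, assuming a binding named `MY_DO` (the binding name and URL are illustrative):

```javascript
// Route a request to a Durable Object instance derived from a name.
async function callObject(env, name) {
  // Each distinct name maps to exactly one globally unique instance.
  const id = env.MY_DO.idFromName(name);
  const stub = env.MY_DO.get(id);
  // The stub's fetch() invokes the Durable Object's own fetch handler.
  const response = await stub.fetch("https://do/increment");
  return response.text();
}
```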
Example:
{ "durable_objects": { "bindings": [ { "name": "<BINDING_NAME>", "class_name": "<CLASS_NAME>" } ] }}[[durable_objects.bindings]]name = "<BINDING_NAME>"class_name = "<CLASS_NAME>"When making changes to your Durable Object classes, you must perform a migration. Refer to Durable Object migrations.
- `tag` (string, required): A unique identifier for this migration.
- `new_sqlite_classes` (string[], optional): The new Durable Objects being defined.
- `renamed_classes` ({from: string, to: string}[], optional): The Durable Objects being renamed.
- `deleted_classes` (string[], optional): The Durable Objects being removed.
Example:
{ "migrations": [ { "tag": "v1", "new_sqlite_classes": [ "DurableObjectExample" ] }, { "tag": "v2", "renamed_classes": [ { "from": "DurableObjectExample", "to": "UpdatedName" } ], "deleted_classes": [ "DeprecatedClass" ] } ]}[[migrations]]tag = "v1" # Should be unique for each entrynew_sqlite_classes = ["DurableObjectExample"] # Array of new classes
[[migrations]]tag = "v2"renamed_classes = [{from = "DurableObjectExample", to = "UpdatedName" }] # Array of rename directivesdeleted_classes = ["DeprecatedClass"] # Array of deleted class namesYou can send an email about your Worker's activity from your Worker to an email address verified on Email Routing. This is useful for when you want to know about certain types of events being triggered, for example.
Before you can bind an email address to your Worker, you need to enable Email Routing and have at least one verified email address. Then, assign an array to the `send_email` key with the type of email binding you need.
- `name` (string, required): The binding name.
- `destination_address` (string, optional): The chosen email address you send emails to.
- `allowed_destination_addresses` (string[], optional): The allowlist of email addresses you send emails to.
You can add one or more types of bindings to your Wrangler file. However, each attribute must be on its own line:
{ "send_email": [ { "name": "<NAME_FOR_BINDING1>" }, { "name": "<NAME_FOR_BINDING2>", "destination_address": "<YOUR_EMAIL>@example.com" }, { "name": "<NAME_FOR_BINDING3>", "allowed_destination_addresses": [ "<YOUR_EMAIL>@example.com", "<YOUR_EMAIL2>@example.com" ] } ]}send_email = [ {name = "<NAME_FOR_BINDING1>"}, {name = "<NAME_FOR_BINDING2>", destination_address = "<YOUR_EMAIL>@example.com"}, {name = "<NAME_FOR_BINDING3>", allowed_destination_addresses = ["<YOUR_EMAIL>@example.com", "<YOUR_EMAIL2>@example.com"]},]Environment variables are a type of binding that allow you to attach text strings or JSON values to your Worker.
Example:
{ "name": "my-worker-dev", "vars": { "API_HOST": "example.com", "API_ACCOUNT_ID": "example_user", "SERVICE_X_DATA": { "URL": "service-x-api.dev.example", "MY_ID": 123 } }}name = "my-worker-dev"
[vars]API_HOST = "example.com"API_ACCOUNT_ID = "example_user"SERVICE_X_DATA = { URL = "service-x-api.dev.example", MY_ID = 123 }Hyperdrive bindings allow you to interact with and query any Postgres database from within a Worker.
- `binding` (string, required): The binding name.
- `id` (string, required): The ID of the Hyperdrive configuration.
Example:
{ "compatibility_flags": [ "nodejs_compat_v2" ], "hyperdrive": [ { "binding": "<BINDING_NAME>", "id": "<ID>" } ]}# required for database drivers to functioncompatibility_flags = ["nodejs_compat_v2"]
[[hyperdrive]]binding = "<BINDING_NAME>"id = "<ID>"Cloudflare Images lets you make transformation requests to optimize, resize, and manipulate images stored in remote sources.
To bind Images to your Worker, assign an array of the below object to the images key.
- `binding` (string, required): The name of the binding used to refer to the Images API.
{ "images": { "binding": "IMAGES", // i.e. available in your Worker on env.IMAGES },}[images]binding = "IMAGES"Workers KV is a global, low-latency, key-value data store. It stores data in a small number of centralized data centers, then caches that data in Cloudflare’s data centers after access.
To bind KV namespaces to your Worker, assign an array of the below object to the kv_namespaces key.
- `binding` (string, required): The binding name used to refer to the KV namespace.
- `id` (string, required): The ID of the KV namespace.
- `preview_id` (string, optional): The preview ID of this KV namespace. This option is required when using `wrangler dev --remote` to develop against remote resources. If developing locally (without `--remote`), this is an optional field. `wrangler dev` will use this ID for the KV namespace. Otherwise, `wrangler dev` will use `id`.
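At runtime, the namespace is read and written through methods on the binding. A read-through sketch, assuming a binding named `MY_NAMESPACE` (the key, value, and TTL are illustrative):

```javascript
// Read-through cache pattern against a KV namespace binding.
async function getGreeting(env) {
  let value = await env.MY_NAMESPACE.get("greeting"); // resolves to null if the key is absent
  if (value === null) {
    value = "hello";
    // Store the value with a one-hour expiration.
    await env.MY_NAMESPACE.put("greeting", value, { expirationTtl: 3600 });
  }
  return value;
}
```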
Example:
{ "kv_namespaces": [ { "binding": "<BINDING_NAME1>", "id": "<NAMESPACE_ID1>" }, { "binding": "<BINDING_NAME2>", "id": "<NAMESPACE_ID2>" } ]}[[kv_namespaces]]binding = "<BINDING_NAME1>"id = "<NAMESPACE_ID1>"
[[kv_namespaces]]binding = "<BINDING_NAME2>"id = "<NAMESPACE_ID2>"Queues is Cloudflare's global message queueing service, providing guaranteed delivery and message batching. To interact with a queue with Workers, you need a producer Worker to send messages to the queue and a consumer Worker to pull batches of messages out of the Queue. A single Worker can produce to and consume from multiple Queues.
To bind Queues to your producer Worker, assign an array of the below object to the [[queues.producers]] key.
- `queue` (string, required): The name of the queue, used on the Cloudflare dashboard.
- `binding` (string, required): The binding name used to refer to the queue in your Worker. The binding must be a valid JavaScript variable name. For example, `binding = "MY_QUEUE"` or `binding = "productionQueue"` would both be valid names for the binding.
- `delivery_delay` (number, optional): The number of seconds to delay delivery of messages sent to the queue, by default. This can be overridden on a per-message or per-batch basis.
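At runtime, the producer binding exposes a `send()` method. A sketch, assuming a binding named `MY_QUEUE` (the binding name and payload shape are illustrative):

```javascript
// Publish a message through a queue producer binding.
async function enqueueSignup(env, email) {
  // send() accepts any structured-cloneable value; delaySeconds overrides
  // the queue's default delivery_delay for this one message.
  await env.MY_QUEUE.send({ type: "signup", email }, { delaySeconds: 10 });
}
```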
Example:
{ "queues": { "producers": [ { "binding": "<BINDING_NAME>", "queue": "<QUEUE_NAME>", "delivery_delay": 60 } ] }}[[queues.producers]] binding = "<BINDING_NAME>" queue = "<QUEUE_NAME>" delivery_delay = 60 # Delay messages by 60 seconds before they are delivered to a consumerTo bind Queues to your consumer Worker, assign an array of the below object to the [[queues.consumers]] key.
- `queue` (string, required): The name of the queue, used on the Cloudflare dashboard.
- `max_batch_size` (number, optional): The maximum number of messages allowed in each batch.
- `max_batch_timeout` (number, optional): The maximum number of seconds to wait for messages to fill a batch before the batch is sent to the consumer Worker.
- `max_retries` (number, optional): The maximum number of retries for a message, if it fails or `retryAll()` is invoked.
- `dead_letter_queue` (string, optional): The name of another queue to send a message to if it fails processing at least `max_retries` times. If a `dead_letter_queue` is not defined, messages that repeatedly fail processing will be discarded. If there is no queue with the specified name, it will be created automatically.
- `max_concurrency` (number, optional): The maximum number of concurrent consumers allowed to run at once. Leaving this unset means the number of invocations will scale to the currently supported maximum. Refer to Consumer concurrency for more information on how consumers autoscale, particularly when messages are retried.
- `retry_delay` (number, optional): The number of seconds to delay retried messages, by default, before they are redelivered to the consumer. This can be overridden on a per-message or per-batch basis when retrying messages.
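A consumer Worker receives batches through a `queue()` handler, acknowledging or retrying each message. A minimal sketch (the processing helper is illustrative; in a real Worker the object is the module's default export):

```javascript
// Consumer Worker shape: queue() is invoked with batches from the bound queue.
const worker = {
  async queue(batch, env) {
    for (const message of batch.messages) {
      try {
        await handle(message.body);
        message.ack(); // Mark this message as successfully processed.
      } catch (err) {
        message.retry(); // Redeliver later, up to max_retries times.
      }
    }
  },
};

// Hypothetical processing step; a real consumer would do useful work here.
async function handle(body) {
  if (body == null) throw new Error("empty message");
}
```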
Example:
{ "queues": { "consumers": [ { "queue": "my-queue", "max_batch_size": 10, "max_batch_timeout": 30, "max_retries": 10, "dead_letter_queue": "my-queue-dlq", "max_concurrency": 5, "retry_delay": 120 } ] }}[[queues.consumers]] queue = "my-queue" max_batch_size = 10 max_batch_timeout = 30 max_retries = 10 dead_letter_queue = "my-queue-dlq" max_concurrency = 5 retry_delay = 120 # Delay retried messages by 2 minutes before re-attempting deliveryCloudflare R2 Storage allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.
To bind R2 buckets to your Worker, assign an array of the below object to the r2_buckets key.
- `binding` (string, required): The binding name used to refer to the R2 bucket.
- `bucket_name` (string, required): The name of this R2 bucket.
- `jurisdiction` (string, optional): The jurisdiction where this R2 bucket is located, if a jurisdiction has been specified. Refer to Jurisdictional Restrictions.
- `preview_bucket_name` (string, optional): The preview name of this R2 bucket. If provided, `wrangler dev` will use this name for the R2 bucket. Otherwise, it will use `bucket_name`. This option is required when using `wrangler dev --remote`.
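At runtime, objects are written and read through `put()` and `get()` on the binding. A sketch, assuming a binding named `MY_BUCKET` (the binding name, key, and contents are illustrative):

```javascript
// Write an object to an R2 bucket binding, then read it back.
async function roundTrip(env) {
  await env.MY_BUCKET.put("notes/today.txt", "hello from R2");
  const object = await env.MY_BUCKET.get("notes/today.txt"); // null if the key is absent
  return object === null ? null : object.text();
}
```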
Example:
{ "r2_buckets": [ { "binding": "<BINDING_NAME1>", "bucket_name": "<BUCKET_NAME1>" }, { "binding": "<BINDING_NAME2>", "bucket_name": "<BUCKET_NAME2>" } ]}[[r2_buckets]]binding = "<BINDING_NAME1>"bucket_name = "<BUCKET_NAME1>"
[[r2_buckets]]binding = "<BINDING_NAME2>"bucket_name = "<BUCKET_NAME2>"A Vectorize index allows you to insert and query vector embeddings for semantic search, classification and other vector search use-cases.
To bind Vectorize indexes to your Worker, assign an array of the below object to the vectorize key.
- `binding` (string, required): The binding name used to refer to the bound index from your Worker code.
- `index_name` (string, required): The name of the index to bind.
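At runtime, the binding exposes a `query()` method that returns the closest stored vectors. A sketch, assuming a binding named `MY_INDEX` (the binding name, vector, and `topK` are illustrative):

```javascript
// Query a Vectorize index binding for the nearest stored vectors.
async function nearest(env, embedding) {
  // query() resolves with { matches }, each match carrying an id and a score.
  const { matches } = await env.MY_INDEX.query(embedding, { topK: 3 });
  return matches.map((m) => m.id);
}
```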
Example:
{ "vectorize": [ { "binding": "<BINDING_NAME>", "index_name": "<INDEX_NAME>" } ]}[[vectorize]]binding = "<BINDING_NAME>"index_name = "<INDEX_NAME>"A service binding allows you to send HTTP requests to another Worker without those requests going over the Internet. The request immediately invokes the downstream Worker, reducing latency as compared to a request to a third-party service. Refer to About Service Bindings.
To bind other Workers to your Worker, assign an array of the below object to the services key.
- `binding` (string, required): The binding name used to refer to the bound Worker.
- `service` (string, required): The name of the Worker. To bind to a Worker in a specific environment, append the environment name to the Worker name, in the format `<worker-name>-<environment-name>`. For example, to bind to a Worker called `worker-name` in its `staging` environment, `service` should be set to `worker-name-staging`.
- `entrypoint` (string, optional): The name of the entrypoint to bind to. If you do not specify an entrypoint, the default export of the Worker will be used.
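At runtime, calling `fetch()` on the binding invokes the bound Worker directly, without the request leaving Cloudflare's network. A sketch, assuming a binding named `AUTH_SERVICE` (the binding name and path are illustrative):

```javascript
// Invoke another Worker over a service binding.
async function isAuthorized(env, token) {
  // The URL's host is ignored for routing; the bound Worker receives the request.
  const response = await env.AUTH_SERVICE.fetch("https://auth/check", {
    headers: { Authorization: `Bearer ${token}` },
  });
  return response.ok;
}
```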
Example:
{ "services": [ { "binding": "<BINDING_NAME>", "service": "<WORKER_NAME>", "entrypoint": "<ENTRYPOINT_NAME>" } ]}[[services]]binding = "<BINDING_NAME>"service = "<WORKER_NAME>"entrypoint = "<ENTRYPOINT_NAME>"Refer to Assets.
Workers Analytics Engine provides analytics, observability and data logging from Workers. Write data points to your Worker binding then query the data using the SQL API.
To bind Analytics Engine datasets to your Worker, add an array of objects with the following shape to the `analytics_engine_datasets` key.
- `binding` string required
  - The binding name used to refer to the dataset.
- `dataset` string optional
  - The dataset name to write to. This will default to the same name as the binding if it is not supplied.
Example:
{ "analytics_engine_datasets": [ { "binding": "<BINDING_NAME>", "dataset": "<DATASET_NAME>" } ]}[[analytics_engine_datasets]]binding = "<BINDING_NAME>"dataset = "<DATASET_NAME>"To communicate with origins that require client authentication, a Worker can present a certificate for mTLS in subrequests. Wrangler provides the mtls-certificate command to upload and manage these certificates.
To create a binding to an mTLS certificate for your Worker, assign an array of objects with the following shape to the mtls_certificates key.
- `binding` string required
  - The binding name used to refer to the certificate.
- `certificate_id` string required
  - The ID of the certificate. Wrangler displays this via the `mtls-certificate upload` and `mtls-certificate list` commands.
Example of a Wrangler configuration file that includes an mTLS certificate binding:
{ "mtls_certificates": [ { "binding": "<BINDING_NAME1>", "certificate_id": "<CERTIFICATE_ID1>" }, { "binding": "<BINDING_NAME2>", "certificate_id": "<CERTIFICATE_ID2>" } ]}[[mtls_certificates]]binding = "<BINDING_NAME1>"certificate_id = "<CERTIFICATE_ID1>"
[[mtls_certificates]]binding = "<BINDING_NAME2>"certificate_id = "<CERTIFICATE_ID2>"mTLS certificate bindings can then be used at runtime to communicate with secured origins via their fetch method.
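For example, here is a sketch of calling a secured origin through the binding from Worker code. The binding name `MY_CERT` and the origin URL are hypothetical, and the stubbed `env` below only stands in for the Workers runtime so the call shape is visible outside a deployment:

```javascript
// Sketch: using an mTLS certificate binding at runtime. The binding
// exposes a fetch() method (mirroring the standard fetch() API) that
// presents the client certificate when connecting to the origin.
// MY_CERT is a hypothetical binding name.
async function handle(request, env) {
  // Subrequests made via the binding present the client certificate.
  const res = await env.MY_CERT.fetch("https://secured-origin.example/api");
  return res.text();
}

// Outside the Workers runtime, stub the binding to illustrate the
// call shape (in a deployed Worker the runtime injects the binding):
const fakeEnv = {
  MY_CERT: {
    fetch: async (url) => ({ text: async () => "hello from origin" }),
  },
};
```

In a deployed Worker, `handle` would be the body of your fetch handler, and `env` would carry the real binding.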
Workers AI allows you to run machine learning models, on the Cloudflare network, from your own code – whether that be from Workers, Pages, or anywhere via REST API.
Unlike other bindings, a Worker project is limited to a single AI binding.
- `binding` string required
  - The binding name.
Example:
{ "ai": { "binding": "AI" }}[ai]binding = "AI" # available in your Worker code on `env.AI`Static assets allows developers to run front-end websites on Workers. You can configure the directory of assets, an optional runtime binding, and routing configuration options.
You can only configure one collection of assets per Worker.
The following options are available under the assets key.
- `directory` string optional
  - Folder of static assets to be served.
  - Not required if you're using the Cloudflare Vite plugin, which will automatically point to the client build output.
- `binding` string optional
  - The binding name used to refer to the assets. Optional, and only useful when a Worker script is set with `main`.
- `run_worker_first` boolean | string[] optional, defaults to false
  - Controls whether static assets are fetched directly, or a Worker script is invoked. Can be a boolean (`true`/`false`) or an array of route pattern strings with support for glob patterns (`*`) and exception patterns (`!prefix`). Patterns must begin with `/` or `!/`. Learn more about fetching assets when using `run_worker_first`.
- `html_handling`: `"auto-trailing-slash" | "force-trailing-slash" | "drop-trailing-slash" | "none"` optional, defaults to `"auto-trailing-slash"`
  - Determines the redirects and rewrites of requests for HTML content. Learn more about the various options in assets routing.
- `not_found_handling`: `"single-page-application" | "404-page" | "none"` optional, defaults to `"none"`
  - Determines the handling of requests that do not map to an asset. Learn more about the various options for routing behavior.
Example:
{ "assets": { "directory": "./public", "binding": "ASSETS", "html_handling": "force-trailing-slash", "not_found_handling": "404-page" }}assets = { directory = "./public", binding = "ASSETS", html_handling = "force-trailing-slash", not_found_handling = "404-page" }You can also configure run_worker_first with an array of route patterns:
{ "assets": { "directory": "./public", "binding": "ASSETS", "run_worker_first": [ "/api/*", "!/api/docs/*" ] }}[assets]directory = "./public"binding = "ASSETS"run_worker_first = [ "/api/*", # API calls go to Worker first "!/api/docs/*" # EXCEPTION: For /api/docs/*, try static assets first]You can define Containers to run alongside your Worker using the containers field.
The following options are available:
- `image` string required
  - The image to use for the container. This can either be a local path, in which case `wrangler deploy` will build and push the image, or it can be an image URL. Currently, only the Cloudflare Registry is a supported registry.
- `class_name` string required
  - The corresponding Durable Object class name. This will make this Durable Object a container-enabled Durable Object and allow each instance to control a container. See Durable Object Container Methods for details.
- `instance_type` string optional
  - The instance type of the container. This determines the amount of memory, CPU, and disk given to the container instance. The current options are `"dev"`, `"basic"`, and `"standard"`. The default is `"dev"`. For more information, see the instance types documentation.
- `max_instances` string optional
  - The maximum number of concurrent container instances you want to run. If you have more Durable Objects that request to run a container than this number, the container request will error. You may have more Durable Objects than this number over a longer time period, but you may not have more concurrently.
- `name` string optional
  - The name of your container, used as an identifier. This will default to a combination of your Worker name, the class name, and your environment.
- `image_build_context` string optional
  - The build context of the application. By default, it is the directory of `image`.
- `image_vars` Record<string, string> optional
  - Environment variables to set in the container. These will be used during the build phase and will be defaults when running the container, though they can be overridden at runtime when starting the container.
{ "containers": [ { "class_name": "MyContainer", "image": "./Dockerfile", "max_instances": 10, "instance_type": "basic", "image_vars": { "FOO": "BAR" } } ], "durable_objects": { "bindings": [ { "name": "MY_CONTAINER", "class_name": "MyContainer" } ] }, "migrations": [ { "tag": "v1", "new_sqlite_classes": [ "MyContainer" ] } ]}[[containers]]class_name = "MyContainer"image = "./Dockerfile"max_instances = 10instance_type = "basic" # Optional, defaults to "dev"image_vars = { FOO = "BAR" }
[[durable_objects.bindings]]name = "MY_CONTAINER"class_name = "MyContainer"
[[migrations]]tag = "v1"new_sqlite_classes = ["MyContainer"]Wrangler can operate in two modes: the default bundling mode and --no-bundle mode.
In bundling mode, Wrangler will traverse all the imports of your code and generate a single JavaScript "entry-point" file.
Imported source code is "inlined/bundled" into this entry-point file.
It is also possible to include additional modules into your Worker, which are uploaded alongside the entry-point.
You specify which additional modules should be included in your Worker using the `rules` key, making these modules available to be imported when your Worker is invoked.
The `rules` key is an array of objects with the following shape.
- `type` string required
  - The type of module. Must be one of: `ESModule`, `CommonJS`, `CompiledWasm`, `Text`, or `Data`.
- `globs` string[] required
  - An array of glob rules (for example, `["**/*.md"]`). Refer to glob ↗.
- `fallthrough` boolean optional
  - When set to `true` on a rule, this allows you to have multiple rules for the same `type`.
Example:
{ "rules": [ { "type": "Text", "globs": [ "**/*.md" ], "fallthrough": true } ]}rules = [ { type = "Text", globs = ["**/*.md"], fallthrough = true }]You can import and refer to these modules within your Worker, like so:
```javascript
import markdown from "./example.md";

export default {
  async fetch() {
    return new Response(markdown);
  },
};
```

Normally Wrangler will only include additional modules that are statically imported in your source code, as in the example above.
By setting `find_additional_modules` to `true` in your configuration file, Wrangler will traverse the file tree below `base_dir`.
Any files that match `rules` will also be included as unbundled, external modules in the deployed Worker.
`base_dir` defaults to the directory containing your main entrypoint.
See https://developers.cloudflare.com/workers/wrangler/bundling/ ↗ for more details and examples.
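For example, a configuration that uploads all Markdown files below the source directory as unbundled Text modules might look like this (a sketch; the `base_dir` and globs are illustrative and should match your project layout):

```jsonc
{
  "find_additional_modules": true,
  "base_dir": "./src",
  "rules": [
    { "type": "Text", "globs": ["**/*.md"], "fallthrough": true }
  ]
}
```

```toml
find_additional_modules = true
base_dir = "./src"
rules = [
  { type = "Text", globs = ["**/*.md"], fallthrough = true }
]
```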
You can configure various aspects of local development, such as the local protocol or port.
- `ip` string optional
  - IP address for the local dev server to listen on. Defaults to `localhost`.
- `port` number optional
  - Port for the local dev server to listen on. Defaults to `8787`.
- `local_protocol` string optional
  - Protocol that the local dev server listens to requests on. Defaults to `http`.
- `upstream_protocol` string optional
  - Protocol that the local dev server forwards requests on. Defaults to `https`.
- `host` string optional
  - Host to forward requests to. Defaults to the host of the first `route` of the Worker.
Example:
{ "dev": { "ip": "192.168.1.1", "port": 8080, "local_protocol": "http" }}[dev]ip = "192.168.1.1"port = 8080local_protocol = "http"Secrets are a type of binding that allow you to attach encrypted text values to your Worker.
When developing your Worker or Pages Function, create a .dev.vars file in the root of your project to define secrets that will be used when running wrangler dev or wrangler pages dev, as opposed to using environment variables in the Wrangler configuration file. This works both in local and remote development modes.
The `.dev.vars` file should be formatted like a dotenv file, such as `KEY="VALUE"`:

```
SECRET_KEY="value"
API_TOKEN="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"
```

To set different secrets for each environment, create files named `.dev.vars.<environment-name>`. When you use `wrangler <command> --env <environment-name>`, the corresponding environment-specific file will be loaded instead of the `.dev.vars` file.
Like other environment variables, secrets are non-inheritable and must be defined per environment.
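For example, a hypothetical `.dev.vars.staging` file (the key names and values below are placeholders) might contain:

```
SECRET_KEY="staging-value"
API_TOKEN="staging-token"
```

Running a command such as `npx wrangler dev --env staging` would then load this file in place of `.dev.vars`.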
You can configure Wrangler to replace all calls to import a particular package with a module of your choice, by configuring the alias field:
{ "alias": { "foo": "./replacement-module-filepath" }}[alias]"foo" = "./replacement-module-filepath"export const bar = "baz";With the configuration above, any calls to import or require() the module foo will be aliased to point to your replacement module:
```javascript
import { bar } from "foo";

console.log(bar); // returns "baz"
```

You can use module aliasing to provide an implementation of an NPM package that does not work on Workers — even if you only rely on that NPM package indirectly, as a dependency of one of your Worker's dependencies.
For example, some NPM packages depend on node-fetch ↗, a package that provided a polyfill of the fetch() API, before it was built into Node.js.
node-fetch isn't needed in Workers, because the fetch() API is provided by the Workers runtime. And node-fetch doesn't work on Workers, because it relies on currently unsupported Node.js APIs from the http/https modules.
You can alias all imports of node-fetch to instead point directly to the fetch() API that is built into the Workers runtime:
{ "alias": { "node-fetch": "./fetch-nolyfill" }}[alias]"node-fetch" = "./fetch-nolyfill"export default fetch;You can use module aliasing to provide your own polyfill implementation of a Node.js API that is not yet available in the Workers runtime.
For example, let's say the NPM package you rely on calls fs.readFile ↗. You can alias the fs module by adding the following to your Worker's Wrangler configuration file:
{ "alias": { "fs": "./fs-polyfill" }}[alias]"fs" = "./fs-polyfill"export function readFile() { // ...}In many cases, this allows you to work provide just enough of an API to make a dependency work. You can learn more about Cloudflare Workers' support for Node.js APIs on the Cloudflare Workers Node.js API documentation page.
Source maps translate compiled and minified code back to the original code that you wrote. Source maps are combined with the stack trace returned by the JavaScript runtime to present you with a deobfuscated stack trace.
- `upload_source_maps` boolean
  - When `upload_source_maps` is set to `true`, Wrangler will automatically generate and upload source map files when you run `wrangler deploy` or `wrangler versions deploy`.
Example:
{ "upload_source_maps": true}upload_source_maps = trueWorkers Sites allows you to host static websites, or dynamic websites using frameworks like Vue or React, on Workers.
- `bucket` string required
  - The directory containing your static assets. It must be a path relative to your Wrangler configuration file.
- `include` string[] optional
  - An exclusive list of `.gitignore`-style patterns that match file or directory names from your bucket location. Only matched items will be uploaded.
- `exclude` string[] optional
  - A list of `.gitignore`-style patterns that match files or directories in your bucket that should be excluded from uploads.
Example:
{ "site": { "bucket": "./public", "include": [ "upload_dir" ], "exclude": [ "ignore_dir" ] }}[site]bucket = "./public"include = ["upload_dir"]exclude = ["ignore_dir"]Corporate networks will often have proxies on their networks and this can sometimes cause connectivity issues. To configure Wrangler with the appropriate proxy details, add the following environmental variables:
- `https_proxy`
- `HTTPS_PROXY`
- `http_proxy`
- `HTTP_PROXY`
To configure this on macOS, add HTTP_PROXY=http://<YOUR_PROXY_HOST>:<YOUR_PROXY_PORT> before your Wrangler commands.
Example:
```shell
HTTP_PROXY=http://localhost:8080 wrangler dev
```

If your IT team has configured your computer's proxy settings, be aware that the first non-empty environment variable in this list will be used when Wrangler makes outgoing requests.
For example, if both https_proxy and http_proxy are set, Wrangler will only use https_proxy for outgoing requests.
We recommend treating your Wrangler configuration file as the source of truth for your Worker configuration, and to avoid making changes to your Worker via the Cloudflare dashboard if you are using Wrangler.
If you need to make changes to your Worker from the Cloudflare dashboard, the dashboard will generate a TOML snippet for you to copy into your Wrangler configuration file, which will help ensure your Wrangler configuration file is always up to date.
If you change your environment variables in the Cloudflare dashboard, Wrangler will override them the next time you deploy. If you want to disable this behavior, add keep_vars = true to your Wrangler configuration file.
If you change your routes in the dashboard, Wrangler will override them in the next deploy with the routes you have set in your Wrangler configuration file. To manage routes via the Cloudflare dashboard only, remove any route and routes keys from your Wrangler configuration file. Then add workers_dev = false to your Wrangler configuration file. For more information, refer to Deprecations.
Wrangler will not delete your secrets (encrypted environment variables) unless you run wrangler secret delete <key>.
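As noted above, keeping dashboard-defined variables in place on deploy takes a single top-level key:

```jsonc
{
  "keep_vars": true
}
```

```toml
keep_vars = true
```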
Some framework tools, or custom pre-build processes, generate a modified Wrangler configuration to be used to deploy the Worker code.
In this case, the tool may also create a special .wrangler/deploy/config.json file that redirects Wrangler to use the generated configuration rather than the original, user's configuration.
Wrangler uses this generated configuration only for the following deploy and dev related commands:
- `wrangler deploy`
- `wrangler dev`
- `wrangler versions upload`
- `wrangler versions deploy`
- `wrangler pages deploy`
- `wrangler pages functions build`
When running these commands, Wrangler looks up the directory tree from the current working directory for a file at the path .wrangler/deploy/config.json.
This file must contain only a single JSON object of the form:
{ "configPath": "../../path/to/wrangler.jsonc" }When this config.json file exists, Wrangler will follow the configPath (relative to the .wrangler/deploy/config.json file) to find the generated Wrangler configuration file to load and use in the current command.
Wrangler will display messaging to the user to indicate that the configuration has been redirected to a different file than the user's configuration file.
A common example of using a redirected configuration is where a custom build tool, or framework, wants to modify the user's configuration to be used when deploying, by generating a new configuration in a dist directory.
- First, the user writes code that uses Cloudflare Workers resources, configured via a user's Wrangler configuration file.
{ "name": "my-worker", "main": "src/index.ts", "kv_namespaces": [ { "binding": "<BINDING_NAME1>", "id": "<NAMESPACE_ID1>" } ]}name = "my-worker"main = "src/index.ts"[[kv_namespaces]]binding = "<BINDING_NAME1>"id = "<NAMESPACE_ID1>"Note that this configuration points main at the user's code entry-point.
- Then, the user runs a custom build, which might read the user's Wrangler configuration file to find the source code entry-point:

```shell
my-tool build
```

- This `my-tool` generates a `dist` directory that contains both compiled code and a new generated deployment configuration file. It also creates a `.wrangler/deploy/config.json` file that redirects Wrangler to the new, generated deployment configuration file:

```
dist/
├── index.js
└── wrangler.jsonc
.wrangler/
└── deploy/
    └── config.json
```

The generated `dist/wrangler.jsonc` might contain:

```jsonc
{
  "name": "my-worker",
  "main": "./index.js",
  "kv_namespaces": [
    { "binding": "<BINDING_NAME1>", "id": "<NAMESPACE_ID1>" }
  ]
}
```

Note that, now, the `main` property points to the generated code entry-point.

And the `.wrangler/deploy/config.json` contains the path to the generated configuration file:

```json
{ "configPath": "../../dist/wrangler.jsonc" }
```