Both the VS Code Extension and the Malloy CLI use a malloy-config.json file to configure database connections. This page documents the shared file format, connection types, and environment variable options.
Config File Format
The file contains a connections object where each key is a connection name and each value specifies the connection type and its parameters. The first connection listed is the default.
```json
{
  "connections": {
    "my_duckdb": {"is": "duckdb", "databasePath": "./data.db"},
    "my_bq": {"is": "bigquery", "projectId": "my-project"}
  }
}
```
The is field identifies the connection type. All other fields are type-specific parameters documented below.
Default connections
Setting includeDefaultConnections to true automatically creates a connection for every registered database type that you haven't already listed. Each gets the type's name as its connection name and picks up any registered property defaults:
```json
{
  "includeDefaultConnections": true,
  "connections": {
    "warehouse": {"is": "duckdb", "databasePath": "./data.db"}
  }
}
```
In this example, you get your explicit warehouse connection plus auto-generated connections for bigquery, postgres, snowflake, etc. — each with default settings. A generated connection is skipped if its type or name collides with one you already listed.
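The expansion rule above can be sketched in a few lines. This is an illustrative model, not Malloy's implementation; the registry contents and the function name are hypothetical:

```python
# Hypothetical sketch of includeDefaultConnections expansion.
# REGISTERED_TYPES stands in for whatever database types are registered.
REGISTERED_TYPES = {"duckdb", "bigquery", "postgres", "snowflake"}

def expand_defaults(config):
    """Return the connection map with one auto-generated entry per
    registered type, skipping any type or name already listed."""
    connections = dict(config.get("connections", {}))
    if not config.get("includeDefaultConnections"):
        return connections
    used_types = {c.get("is") for c in connections.values()}
    for db_type in sorted(REGISTERED_TYPES):
        # Skip on collision with an explicit connection's type or name.
        if db_type in used_types or db_type in connections:
            continue
        connections[db_type] = {"is": db_type}
    return connections
```

For the example config above, the explicit `warehouse` connection suppresses the auto-generated `duckdb` entry (type collision), while the other registered types each get a default connection.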
Config file discovery
Both VS Code and the CLI discover config files automatically. Starting from the Malloy file you're working with, Malloy walks up through parent directories until it reaches the project root, looking for malloy-config.json at each level. The first one found is used.
You can also place a malloy-config-local.json alongside malloy-config.json for developer-specific overrides (credentials, local paths) that shouldn't be checked into git. When both files exist at the same directory level, they are merged: connections entries are deep-merged by name (local wins on conflicts), while all other sections (manifestPath, includeDefaultConnections, etc.) are replaced wholesale by the local file's value.
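The merge semantics can be sketched like this. It is a simplified model (one level of deep merge per connection), and the function name is hypothetical:

```python
def merge_configs(base: dict, local: dict) -> dict:
    """Merge malloy-config-local.json over malloy-config.json:
    connections are merged by name with local winning per field;
    every other top-level key is replaced wholesale by local's value."""
    merged = {**base, **local}  # non-connection keys: local replaces base
    base_conns = base.get("connections", {})
    local_conns = local.get("connections", {})
    if base_conns or local_conns:
        merged["connections"] = {
            name: {**base_conns.get(name, {}), **local_conns.get(name, {})}
            for name in {**base_conns, **local_conns}
        }
    return merged
```

So a local file can override just one field (say, a password) of a shared connection without repeating the rest of its definition.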
Add malloy-config-local.json to your .gitignore:
```
malloy-config-local.json
```

Connection Types
duckdb — DuckDB / MotherDuck
| Parameter | Type | Description |
|---|---|---|
| databasePath | file | Path to .db file (default: :memory:) |
| workingDirectory | string | Working directory for relative paths. Default: the project root (the directory discovery walked up to) |
| motherDuckToken | secret | MotherDuck auth token. Default: {env: "MOTHERDUCK_TOKEN"} |
| additionalExtensions | string | Comma-separated DuckDB extensions to load (e.g. "spatial,fts"). Built-in: json, httpfs, icu |
| readOnly | boolean | Open database read-only |
| setupSQL | text | Connection setup SQL (see below) |
| securityPolicy | string | "none" (default), "local", or "sandboxed". See restricted execution |
| allowedDirectories | json | Array of directories DuckDB may read/write. Enforced when securityPolicy is "sandboxed" |
| enableExternalAccess | boolean | DuckDB's enable_external_access setting |
| lockConfiguration | boolean | Lock DuckDB config after setup |
| autoloadKnownExtensions | boolean | DuckDB autoload_known_extensions |
| autoinstallKnownExtensions | boolean | DuckDB autoinstall_known_extensions |
| allowCommunityExtensions | boolean | DuckDB allow_community_extensions |
| allowUnsignedExtensions | boolean | DuckDB allow_unsigned_extensions |
| tempFileEncryption | boolean | DuckDB temp_file_encryption |
| threads | number | DuckDB threads |
| memoryLimit | string | DuckDB memory_limit (e.g. "1GB") |
| tempDirectory | string | DuckDB temp_directory |
| extensionDirectory | string | DuckDB extension_directory |
Restricted execution
For untrusted code, Malloy offers a single securityPolicy property with three levels:
- "none" — no security policy applied. Ordinary DuckDB behavior. This is the default.
- "local" — no network access. DuckDB cannot reach the network, but local filesystem access is not sandboxed to specific directories. Appropriate when the host already provides filesystem isolation (e.g. a container boundary).
- "sandboxed" — no network access AND filesystem access confined to allowedDirectories (defaults to workingDirectory). The reviewed strict recipe for untrusted Malloy. POSIX only.
Both "local" and "sandboxed" force enableExternalAccess=false, block httpfs and INSTALL, reject remote databasePath and motherDuckToken, lock configuration, and encrypt temp files. "sandboxed" additionally enforces directory containment and derives a safe tempDirectory inside the sandbox.
DuckDB's enable_external_access is a single toggle that gates both filesystem reach and network reach. allowed_directories only takes effect when external access is disabled. This is why securityPolicy is a single axis — the underlying DuckDB mechanism does not support independent filesystem and network control.
The reviewed strict recipe:
```json
{
  "connections": {
    "duckdb": {
      "is": "duckdb",
      "databasePath": "data/app.duckdb",
      "workingDirectory": {"config": "rootDirectory"},
      "securityPolicy": "sandboxed"
    }
  }
}
```
Policies set a floor, not a ceiling. allowedDirectories and tempDirectory can be set explicitly to customize the sandbox. Other policy-controlled settings accept matching values but reject weaker ones — connection creation fails closed. setupSQL, additionalExtensions, motherDuckToken, and remote databasePath are incompatible with any restricted policy; to use them, keep securityPolicy at "none" and configure DuckDB directly. Policies do not set resource limits — configure threads, memoryLimit, timeouts, and host quotas separately.
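The fail-closed behavior can be sketched as a validation pass. Everything here is an illustrative model: the function name, the exact floor values, and the incompatibility list are taken from the prose above, not from Malloy's code:

```python
# Hypothetical fail-closed validation of policy-controlled settings.
POLICY_FLOOR = {
    "local": {"enableExternalAccess": False, "lockConfiguration": True,
              "tempFileEncryption": True},
    "sandboxed": {"enableExternalAccess": False, "lockConfiguration": True,
                  "tempFileEncryption": True},
}
# Parameters the prose says are incompatible with any restricted policy.
INCOMPATIBLE = {"setupSQL", "additionalExtensions", "motherDuckToken"}

def validate_connection(params: dict) -> None:
    """Raise if explicit settings would weaken the chosen securityPolicy."""
    policy = params.get("securityPolicy", "none")
    if policy == "none":
        return  # no floor to enforce
    bad = INCOMPATIBLE & params.keys()
    if bad:
        raise ValueError(f"incompatible with policy {policy!r}: {sorted(bad)}")
    for key, required in POLICY_FLOOR[policy].items():
        # Matching values are accepted; weaker ones fail closed.
        if key in params and params[key] != required:
            raise ValueError(f"{key}={params[key]!r} is weaker than the policy floor")
```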
bigquery — Google BigQuery
| Parameter | Type | Description |
|---|---|---|
| projectId | string | GCP project ID |
| serviceAccountKeyPath | file | Path to service account JSON key |
| serviceAccountKey | json | Service account key as a JSON object (alternative to file path) |
| location | string | Dataset location |
| maximumBytesBilled | string | Byte billing cap |
| timeoutMs | string | Query timeout in ms |
| billingProjectId | string | Billing project (if different) |
| setupSQL | text | Connection setup SQL (see below) |
databricks — Databricks
| Parameter | Type | Description |
|---|---|---|
| host | string | Workspace hostname (e.g. my-workspace.cloud.databricks.com) |
| path | string | SQL warehouse HTTP path (e.g. /sql/1.0/warehouses/abc123) |
| token | secret | Personal access token (optional if using OAuth) |
| oauthClientId | string | OAuth M2M client ID |
| oauthClientSecret | secret | OAuth M2M client secret |
| defaultCatalog | string | Default Unity Catalog name |
| defaultSchema | string | Default schema name |
| setupSQL | text | Connection setup SQL (see below) |
Authentication: provide either token or the oauthClientId + oauthClientSecret pair.
postgres — PostgreSQL
| Parameter | Type | Description |
|---|---|---|
| host | string | Server host |
| port | number | Server port |
| username | string | Username |
| password | password | Password |
| databaseName | string | Database name |
| connectionString | string | Full connection string (alternative) |
| setupSQL | text | Connection setup SQL (see below) |
mysql — MySQL
| Parameter | Type | Description |
|---|---|---|
| host | string | Server host. Default: {env: "MYSQL_HOST"} |
| port | number | Server port. Default: {env: "MYSQL_PORT"} or 3306 |
| database | string | Database name. Default: {env: "MYSQL_DATABASE"} |
| user | string | Username. Default: {env: "MYSQL_USER"} |
| password | password | Password. Default: {env: "MYSQL_PASSWORD"} |
| setupSQL | text | Connection setup SQL (see below) |
snowflake — Snowflake
| Parameter | Type | Description |
|---|---|---|
| account | string | Snowflake account identifier (required). Default: {env: "SNOWFLAKE_ACCOUNT"} |
| username | string | Username. Default: {env: "SNOWFLAKE_USER"} |
| password | password | Password. Default: {env: "SNOWFLAKE_PASSWORD"} |
| role | string | Role |
| warehouse | string | Warehouse. Default: {env: "SNOWFLAKE_WAREHOUSE"} |
| database | string | Database. Default: {env: "SNOWFLAKE_DATABASE"} |
| schema | string | Schema. Default: {env: "SNOWFLAKE_SCHEMA"} |
| privateKeyPath | file | Path to private key (.pem/.key) |
| privateKey | password | Private key PEM string (alternative to file path) |
| privateKeyPass | password | Private key passphrase |
| timeoutMs | number | Query timeout in ms |
| schemaSampleTimeoutMs | number | Variant schema sample timeout in ms (default 15000) |
| schemaSampleRowLimit | number | Row limit for variant schema sample (default 1000) |
| schemaSampleFullScanMaxBytes | number | Tables at or below this byte size are full-scanned instead of sampled |
| setupSQL | text | Connection setup SQL (see below) |
Snowflake also supports TOML configuration at ~/.snowflake/connections.toml. See Snowflake connection configuration for details.
trino — Trino
| Parameter | Type | Description |
|---|---|---|
| server | string | Server hostname. Default: {env: "TRINO_SERVER"} |
| port | number | Server port |
| catalog | string | Catalog name. Default: {env: "TRINO_CATALOG"} |
| schema | string | Schema name. Default: {env: "TRINO_SCHEMA"} |
| user | string | Username. Default: {env: "TRINO_USER"} |
| password | password | Password. Default: {env: "TRINO_PASSWORD"} |
| setupSQL | text | Connection setup SQL (see below) |
| source | string | Source name sent to Trino (appears in query tracking) |
| ssl | json | TLS/SSL configuration object |
| session | json | Session properties as {"key": "value"} pairs |
| extraCredential | json | Extra credentials as {"key": "value"} pairs |
| extraHeaders | json | Additional HTTP headers as {"key": "value"} pairs |
The ssl, session, extraCredential, and extraHeaders properties accept JSON objects that are passed directly to the underlying trino-client library. For example, to connect to a Trino server with a self-signed certificate:
```json
{
  "connections": {
    "my_trino": {
      "is": "trino",
      "server": "https://trino.example.com",
      "catalog": "hive",
      "user": "analyst",
      "password": {"env": "TRINO_PASSWORD"},
      "ssl": {"rejectUnauthorized": false}
    }
  }
}
```
Because these properties use the json type, {"env": "..."} inside them is treated as literal data, not as an environment variable reference.
presto — Presto
| Parameter | Type | Description |
|---|---|---|
| server | string | Server hostname. Default: {env: "PRESTO_HOST"} |
| port | number | Server port (default: 8080). Default: {env: "PRESTO_PORT"} |
| catalog | string | Catalog name. Default: {env: "PRESTO_CATALOG"} |
| schema | string | Schema name. Default: {env: "PRESTO_SCHEMA"} |
| user | string | Username. Default: {env: "PRESTO_USER"} |
| password | password | Password. Default: {env: "PRESTO_PASSWORD"} |
| setupSQL | text | Connection setup SQL (see below) |
Setup SQL
All connection types support a setupSQL parameter. This is a multi-line text field containing SQL statements to execute each time the connection is established.
Each statement must end with ; at the end of a line. Statements can span multiple lines. Only one statement-ending ; is allowed per line.
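This boundary rule can be sketched as a simple splitter. The function name is hypothetical, and this sketch does not validate the one-statement-ending-per-line constraint:

```python
def split_setup_sql(setup_sql: str) -> list[str]:
    """Split setup SQL on statement boundaries: a statement ends
    where a line ends with ';'. Statements may span multiple lines."""
    statements, current = [], []
    for line in setup_sql.splitlines():
        current.append(line)
        if line.rstrip().endswith(";"):
            statements.append("\n".join(current).strip())
            current = []
    return statements
```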
Correct — each statement ends with ; on its own line:
```sql
SET search_path TO analytics;
CREATE TEMP TABLE foo AS SELECT 1;
```
Incorrect — two statements on the same line:
```sql
SET search_path TO analytics; CREATE TEMP TABLE foo AS SELECT 1;
```

Manifest Path
The config file can specify where persistence manifests are stored:
```json
{
  "connections": { ... },
  "manifestPath": "MANIFESTS"
}
```
The manifestPath property is optional and defaults to "MANIFESTS". The manifest file is <manifestPath>/malloy-manifest.json, relative to the config file's location. Both the builder (malloy-cli build) and the VS Code extension read the manifest from this path.
Environment Variables
Any property value can be replaced with an environment variable reference. This is especially useful for sensitive values like passwords and API tokens, so they don't need to appear directly in your config file:
```json
{
  "connections": {
    "my_motherduck": {
      "is": "duckdb",
      "databasePath": "md:my_database",
      "motherDuckToken": {"env": "MOTHERDUCK_TOKEN"}
    },
    "my_postgres": {
      "is": "postgres",
      "host": "db.example.com",
      "password": {"env": "PG_PASSWORD"}
    }
  }
}
```
The {"env": "VAR_NAME"} syntax looks up the value from the named environment variable at connection time. If the variable is not set, the field is omitted and the connection proceeds without it.
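The lookup behavior can be sketched as follows. This is an illustrative model with a hypothetical function name; note that it applies to ordinary properties only, since json-typed properties treat {"env": ...} as literal data:

```python
import os

def resolve_value(value, env=os.environ):
    """Resolve one config value: {"env": "VAR"} becomes the variable's
    value at connection time, or None (field omitted) if the variable
    is unset. Plain values pass through unchanged."""
    if isinstance(value, dict) and set(value) == {"env"}:
        return env.get(value["env"])  # None signals "omit this field"
    return value
```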
You can also provide a plain string value directly — this is useful for testing but not recommended for shared or committed config files.