Thursday, 30 April 2026

MinIO Quickstart Guide


MinIO is a High Performance Object Storage released under Apache License v2.0. It is API compatible with Amazon S3 cloud storage service. Use MinIO to build high performance infrastructure for machine learning, analytics and application data workloads.

This README provides quickstart instructions on running MinIO on bare-metal hardware, including Docker-based installations. For Kubernetes environments, use the MinIO Kubernetes Operator.

Docker Installation

Use the following commands to run a standalone MinIO server on a Docker container.

Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled, with a minimum of 4 drives per MinIO server. See the MinIO Erasure Code Quickstart Guide for more complete documentation.

Stable

Run the following command to run the latest stable image of MinIO on a Docker container using an ephemeral data volume:

docker run -p 9000:9000 minio/minio server /data

The MinIO deployment starts using default root credentials minioadmin:minioadmin. You can test the deployment using the MinIO Browser, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client mc commandline tool. See Test using MinIO Client mc for more information on using the mc commandline tool. For application developers, see https://docs.min.io/docs/ and click MINIO SDKS in the navigation to view MinIO SDKs for supported languages.

NOTE: To deploy MinIO on Docker with persistent storage, you must map local persistent directories from the host OS to the container using the docker -v option. For example, -v /mnt/data:/data maps the host OS drive at /mnt/data to /data on the Docker container.
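
Putting the note above into practice, here is a sketch of a persistent deployment (the host path /mnt/data and the credentials are placeholders; the MINIO_ROOT_USER/MINIO_ROOT_PASSWORD variables assume a recent MinIO release, while older releases used MINIO_ACCESS_KEY/MINIO_SECRET_KEY):

```shell
# Run MinIO detached, with persistent storage and non-default root credentials.
# /mnt/data is a placeholder host path; use any persistent directory.
docker run -d -p 9000:9000 \
  -e "MINIO_ROOT_USER=myadmin" \
  -e "MINIO_ROOT_PASSWORD=change-me-long-password" \
  -v /mnt/data:/data \
  minio/minio server /data
```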

Edge

Run the following command to run the bleeding-edge image of MinIO on a Docker container using an ephemeral data volume:

docker run -p 9000:9000 minio/minio:edge server /data

The MinIO deployment starts using default root credentials minioadmin:minioadmin. You can test the deployment using the MinIO Browser, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client mc commandline tool. See Test using MinIO Client mc for more information on using the mc commandline tool. For application developers, see https://docs.min.io/docs/ and click MINIO SDKS in the navigation to view MinIO SDKs for supported languages.

NOTE: To deploy MinIO on Docker with persistent storage, you must map local persistent directories from the host OS to the container using the docker -v option. For example, -v /mnt/data:/data maps the host OS drive at /mnt/data to /data on the Docker container.

macOS

Use the following commands to run a standalone MinIO server on macOS.

Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled, with a minimum of 4 drives per MinIO server. See the MinIO Erasure Code Quickstart Guide for more complete documentation.

Homebrew (recommended)

Run the following command to install the latest stable MinIO package using Homebrew. Replace /data with the path to the drive or directory in which you want MinIO to store data.

brew install minio/stable/minio
minio server /data
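
If port 9000 is already in use on your machine, the listen address can be changed with the minio server --address flag; for example:

```shell
# Bind the MinIO server to port 9001 instead of the default 9000.
minio server --address :9001 /data
```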

NOTE: If you previously installed minio using brew install minio, it is recommended that you reinstall minio from the official minio/stable/minio tap instead:

brew uninstall minio
brew install minio/stable/minio

The MinIO deployment starts using default root credentials minioadmin:minioadmin. You can test the deployment using the MinIO Browser, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client mc commandline tool. See Test using MinIO Client mc for more information on using the mc commandline tool. For application developers, see https://docs.min.io/docs/ and click MINIO SDKS in the navigation to view MinIO SDKs for supported languages.

Binary Download

Use the following command to download and run a standalone MinIO server on macOS. Replace /data with the path to the drive or directory in which you want MinIO to store data.

wget https://dl.min.io/server/minio/release/darwin-amd64/minio
chmod +x minio
./minio server /data

The MinIO deployment starts using default root credentials minioadmin:minioadmin. You can test the deployment using the MinIO Browser, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client mc commandline tool. See Test using MinIO Client mc for more information on using the mc commandline tool. For application developers, see https://docs.min.io/docs/ and click MINIO SDKS in the navigation to view MinIO SDKs for supported languages.

GNU/Linux

Use the following command to run a standalone MinIO server on Linux hosts running 64-bit Intel/AMD architectures. Replace /data with the path to the drive or directory in which you want MinIO to store data.

wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
./minio server /data
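
Optionally, verify the download before running it. MinIO publishes a .sha256sum file alongside each release binary (the URL below assumes it mirrors the binary URL); compare the published hash with the one you compute locally:

```shell
# Fetch the published checksum and compute one locally; the two hashes
# should match (the checksum file may reference the versioned release name).
wget https://dl.min.io/server/minio/release/linux-amd64/minio.sha256sum
cat minio.sha256sum
sha256sum minio
```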

The following table lists supported architectures. Replace the wget URL with the one matching your Linux host's architecture.

Architecture | URL
64-bit Intel/AMD | https://dl.min.io/server/minio/release/linux-amd64/minio
64-bit ARM | https://dl.min.io/server/minio/release/linux-arm64/minio
64-bit PowerPC LE (ppc64le) | https://dl.min.io/server/minio/release/linux-ppc64le/minio
IBM Z-Series (S390X) | https://dl.min.io/server/minio/release/linux-s390x/minio

The MinIO deployment starts using default root credentials minioadmin:minioadmin. You can test the deployment using the MinIO Browser, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client mc commandline tool. See Test using MinIO Client mc for more information on using the mc commandline tool. For application developers, see https://docs.min.io/docs/ and click MINIO SDKS in the navigation to view MinIO SDKs for supported languages.

NOTE: Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled, with a minimum of 4 drives per MinIO server. See the MinIO Erasure Code Quickstart Guide for more complete documentation.

Microsoft Windows

To run MinIO on 64-bit Windows hosts, download the MinIO executable from the following URL:

https://dl.min.io/server/minio/release/windows-amd64/minio.exe

Use the following command to run a standalone MinIO server on the Windows host. Replace D:\ with the path to the drive or directory in which you want MinIO to store data. You must change the terminal or PowerShell directory to the location of the minio.exe executable, or add the path to that directory to the system PATH:

minio.exe server D:\

The MinIO deployment starts using default root credentials minioadmin:minioadmin. You can test the deployment using the MinIO Browser, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client mc commandline tool. See Test using MinIO Client mc for more information on using the mc commandline tool. For application developers, see https://docs.min.io/docs/ and click MINIO SDKS in the navigation to view MinIO SDKs for supported languages.

NOTE: Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled, with a minimum of 4 drives per MinIO server. See the MinIO Erasure Code Quickstart Guide for more complete documentation.

FreeBSD

MinIO does not provide an official FreeBSD binary. However, FreeBSD maintains an upstream release using pkg:

pkg install minio
sysrc minio_enable=yes
sysrc minio_disks=/home/user/Photos
service minio start

Install from Source

Use the following commands to compile and run a standalone MinIO server from source. Source installation is intended only for developers and advanced users. If you do not have a working Golang environment, please follow How to install Golang. The minimum required version is go1.15:

GO111MODULE=on go get github.com/minio/minio
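
With module-mode go get, the resulting binary is installed under $(go env GOPATH)/bin (or $GOBIN if set); a minimal sketch of running it from there:

```shell
# Put the Go-installed binary on PATH and start a standalone server.
export PATH="$PATH:$(go env GOPATH)/bin"
minio server /data
```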

The MinIO deployment starts using default root credentials minioadmin:minioadmin. You can test the deployment using the MinIO Browser, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client mc commandline tool. See Test using MinIO Client mc for more information on using the mc commandline tool. For application developers, see https://docs.min.io/docs/ and click MINIO SDKS in the navigation to view MinIO SDKs for supported languages.

NOTE: Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled, with a minimum of 4 drives per MinIO server. See the MinIO Erasure Code Quickstart Guide for more complete documentation.

MinIO strongly recommends against using compiled-from-source MinIO servers for production environments.

Deployment Recommendations

Allow port access for Firewalls

By default, MinIO uses port 9000 to listen for incoming connections. If your platform blocks this port by default, you may need to enable access to it.

ufw

For hosts with ufw enabled (Debian-based distros), you can use the ufw command to allow traffic to specific ports. Use the following command to allow access to port 9000:

ufw allow 9000

The following command allows all incoming traffic to ports 9000 through 9010:

ufw allow 9000:9010/tcp

firewall-cmd

For hosts with firewall-cmd enabled (CentOS), you can use the firewall-cmd command to allow traffic to specific ports. Use the following commands to allow access to port 9000:

firewall-cmd --get-active-zones

This command returns the active zone(s). Now apply port rules to the relevant zones returned above. For example, if the zone is public, use:

firewall-cmd --zone=public --add-port=9000/tcp --permanent

Note that the --permanent flag ensures the rules persist across firewall starts, restarts, and reloads. Finally, reload the firewall for the changes to take effect:

firewall-cmd --reload

iptables

For hosts with iptables enabled (RHEL, CentOS, etc.), you can use the iptables command to allow all traffic to specific ports. Use the following commands to allow access to port 9000:

iptables -A INPUT -p tcp --dport 9000 -j ACCEPT
service iptables restart

The following commands allow all incoming traffic to ports 9000 through 9010:

iptables -A INPUT -p tcp --dport 9000:9010 -j ACCEPT
service iptables restart

Pre-existing data

When deployed on a single drive, MinIO server lets clients access any pre-existing data in the data directory. For example, if MinIO is started with the command minio server /mnt/data, any pre-existing data in the /mnt/data directory would be accessible to the clients.

The above statement is also valid for all gateway backends.

Test MinIO Connectivity

Test using MinIO Browser

MinIO Server comes with an embedded web based object browser. Point your web browser to http://127.0.0.1:9000 to ensure your server has started successfully.

Test using MinIO Client mc

mc provides a modern alternative to UNIX commands like ls, cat, cp, mirror, diff etc. It supports filesystems and Amazon S3 compatible cloud storage services. Follow the MinIO Client Quickstart Guide for further instructions.
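
As a quick illustrative session against the local server (the alias myminio, bucket mybucket, and file example.txt are arbitrary names; recent mc releases use mc alias set, while older ones used mc config host add):

```shell
# Register the local server, create a bucket, upload a file, and list it.
mc alias set myminio http://127.0.0.1:9000 minioadmin minioadmin
mc mb myminio/mybucket
mc cp ./example.txt myminio/mybucket/
mc ls myminio/mybucket
```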

Upgrading MinIO

MinIO server supports rolling upgrades; that is, you can update one MinIO instance at a time in a distributed cluster. This allows upgrades with no downtime. Upgrades can be done manually by replacing the binary with the latest release and restarting all servers in a rolling fashion. However, we recommend that all users run mc admin update from the client. This updates all the nodes in the cluster simultaneously and restarts them, as shown in the following command from the MinIO client (mc):

mc admin update <minio alias, e.g., myminio>

NOTE: Some releases might not allow rolling upgrades; this is always called out in the release notes, so it is generally advised to read them before upgrading. In such a situation, mc admin update is the recommended mechanism to upgrade all servers at once.

Important things to remember during MinIO upgrades

  • mc admin update will only work if the user running MinIO has write access to the parent directory where the binary is located. For example, if the current binary is at /usr/local/bin/minio, you would need write access to /usr/local/bin.
  • mc admin update updates and restarts all servers simultaneously; applications should retry and continue their respective operations once the upgrade completes.
  • mc admin update is disabled in Kubernetes/container environments; container environments provide their own mechanisms for rolling out updates.
  • In the case of federated setups, mc admin update should be run against each cluster individually. Avoid updating mc to any new release until all clusters have been successfully updated.
  • If using kes as the KMS with MinIO, just replace the binary and restart kes. More information about kes can be found here.
  • If using Vault as the KMS with MinIO, ensure you have followed the Vault upgrade procedure outlined here: https://www.vaultproject.io/docs/upgrading/index.html
  • If using etcd with MinIO for federation, ensure you have followed the etcd upgrade procedure outlined here: https://github.com/etcd-io/etcd/blob/master/Documentation/upgrades/upgrading-etcd.md

Explore Further

Contribute to MinIO Project

Please follow MinIO Contributor's Guide

from  https://github.com/ciiiii/minio

( https://github.com/briteming/minio)

-------------------------------------------------

Official repository: https://github.com/minio/minio

cloudflare-worker-starter-kit

 https://github.com/ciiiii/cloudflare-worker-starter-kit/generate

from  https://github.com/ciiiii/cloudflare-worker-starter-kit

An introduction to Cloudflare Workers.

(https://github.com/briteming/cloudflare-worker-starter-kit) 

cloudflare-helm-proxy

A Helm repo proxy running on a Cloudflare Worker.

If you're looking for a proxy for Docker, you can try cloudflare-docker-proxy.

Rules example

  • requests to ${cloudflare_worker_route}/${key} will be proxied to ${url}.
  • the text of */index.yaml will be rewritten according to the replaces rules.
  • $host in replaces will be replaced with the Cloudflare Worker route.
const routes = {
  stable: {
    url: 'https://charts.helm.sh/stable',
    replaces: {
      'charts.helm.sh': '$host',
    },
  },
  incubator: {
    url: 'https://charts.helm.sh/incubator',
    replaces: {
      'charts.helm.sh': '$host',
    },
  },
  grafana: {
    url: 'https://grafana.github.io/helm-charts',
    replaces: {
      'github.com': 'hub.fastgit.org',
    },
  },
  prometheus: {
    url: 'https://prometheus-community.github.io/helm-charts',
    replaces: {
      'prometheus-community.github.io/helm-charts': '$host/prometheus',
    },
  },
  'k8s-at-home': {
    url: 'https://k8s-at-home.com/charts/',
    replaces: {
      'github.com': 'hub.fastgit.org',
    },
  },
}

Usage example

# helm usage
helm repo add stable https://${cloudflare_worker_route}/stable
helm search repo stable/zetcd

# curl test
curl https://${cloudflare_worker_route}/stable/index.yaml
curl https://${cloudflare_worker_route}/stable/packages/zetcd-0.1.2.tgz

from  https://github.com/ciiiii/cloudflare-helm-proxy

vercel-docker-proxy

A Docker registry proxy running on Vercel.

from  https://github.com/ciiiii/vercel-docker-proxy

moltworker

Run OpenClaw (formerly Moltbot, formerly Clawdbot) on Cloudflare Workers

 https://blog.cloudflare.com/moltworker-self-hosted-ai-agent/

 

OpenClaw on Cloudflare Workers

Run OpenClaw (formerly Moltbot, formerly Clawdbot) personal AI assistant in a Cloudflare Sandbox.

Experimental: This is a proof of concept demonstrating that OpenClaw can run in Cloudflare Sandbox. It is not officially supported and may break without notice. Use at your own risk.

Deploy to Cloudflare

Requirements

The following Cloudflare features used by this project have free tiers:

  • Cloudflare Access (authentication)
  • Browser Rendering (for browser navigation)
  • AI Gateway (optional, for API routing/analytics)
  • R2 Storage (optional, for persistence)

Container Cost Estimate

This project uses a standard-1 Cloudflare Container instance (1/2 vCPU, 4 GiB memory, 8 GB disk). Below are approximate monthly costs assuming the container runs 24/7, based on Cloudflare Containers pricing:

Resource | Provisioned | Monthly Usage | Included Free | Overage | Approx. Cost
Memory | 4 GiB | 2,920 GiB-hrs | 25 GiB-hrs | 2,895 GiB-hrs | ~$26/mo
CPU (at ~10% utilization) | 1/2 vCPU | ~2,190 vCPU-min | 375 vCPU-min | ~1,815 vCPU-min | ~$2/mo
Disk | 8 GB | 5,840 GB-hrs | 200 GB-hrs | 5,640 GB-hrs | ~$1.50/mo
Workers Paid plan | | | | | $5/mo
Total | | | | | ~$34.50/mo

Notes:

  • CPU is billed on active usage only, not provisioned capacity. The 10% utilization estimate is a rough baseline for a lightly-used personal assistant; your actual cost will vary with usage.
  • Memory and disk are billed on provisioned capacity for the full time the container is running.
  • To reduce costs, configure SANDBOX_SLEEP_AFTER (e.g., 10m) so the container sleeps when idle. A container that only runs 4 hours/day would cost roughly ~$5-6/mo in compute on top of the $5 plan fee.
  • Network egress, Workers/Durable Objects requests, and logs are additional but typically minimal for personal use.
  • See the instance types table for other options (e.g., lite at 256 MiB memory for $0.50/mo, or standard-4 at 12 GiB for heavier workloads).
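
The usage figures in the table follow from provisioned size multiplied by the hours in a month (~730); a quick sanity check of that arithmetic:

```shell
# Sanity-check the table's monthly usage figures (~730 hours per month).
HOURS=730
echo "Memory: $((4 * HOURS)) GiB-hrs"             # 4 GiB provisioned
echo "Disk:   $((8 * HOURS)) GB-hrs"              # 8 GB provisioned
echo "CPU:    $((HOURS * 60 / 2 / 10)) vCPU-min"  # 1/2 vCPU at ~10% use
```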

What is OpenClaw?

OpenClaw (formerly Moltbot, formerly Clawdbot) is a personal AI assistant with a gateway architecture that connects to multiple chat platforms. Key features:

  • Control UI - Web-based chat interface at the gateway
  • Multi-channel support - Telegram, Discord, Slack
  • Device pairing - Secure DM authentication requiring explicit approval
  • Persistent conversations - Chat history and context across sessions
  • Agent runtime - Extensible AI capabilities with workspace and skills

This project packages OpenClaw to run in a Cloudflare Sandbox container, providing a fully managed, always-on deployment without needing to self-host. Optional R2 storage enables persistence across container restarts.

Architecture

[Diagram: moltworker architecture]

Quick Start

Cloudflare Sandboxes are available on the Workers Paid plan.

# Install dependencies
npm install

# Set your API key (direct Anthropic access)
npx wrangler secret put ANTHROPIC_API_KEY

# Or use Cloudflare AI Gateway instead (see "Optional: Cloudflare AI Gateway" below)
# npx wrangler secret put CLOUDFLARE_AI_GATEWAY_API_KEY
# npx wrangler secret put CF_AI_GATEWAY_ACCOUNT_ID
# npx wrangler secret put CF_AI_GATEWAY_GATEWAY_ID

# Generate and set a gateway token (required for remote access)
# Save this token - you'll need it to access the Control UI
export MOLTBOT_GATEWAY_TOKEN=$(openssl rand -hex 32)
echo "Your gateway token: $MOLTBOT_GATEWAY_TOKEN"
echo "$MOLTBOT_GATEWAY_TOKEN" | npx wrangler secret put MOLTBOT_GATEWAY_TOKEN

# Deploy
npm run deploy

After deploying, open the Control UI with your token:

https://your-worker.workers.dev/?token=YOUR_GATEWAY_TOKEN

Replace your-worker with your actual worker subdomain and YOUR_GATEWAY_TOKEN with the token you generated above.

Note: The first request may take 1-2 minutes while the container starts.

Important: You will not be able to use the Control UI until you complete the following steps. You MUST:

  1. Set up Cloudflare Access to protect the admin UI
  2. Pair your device via the admin UI at /_admin/

You'll also likely want to enable R2 storage so your paired devices and conversation history persist across container restarts (optional but recommended).

Setting Up the Admin UI

To use the admin UI at /_admin/ for device management, you need to:

  1. Enable Cloudflare Access on your worker
  2. Set the Access secrets so the worker can validate JWTs

1. Enable Cloudflare Access on workers.dev

The easiest way to protect your worker is using the built-in Cloudflare Access integration for workers.dev:

  1. Go to the Workers & Pages dashboard
  2. Select your Worker (e.g., moltbot-sandbox)
  3. In Settings, under Domains & Routes, in the workers.dev row, click the meatballs menu (...)
  4. Click Enable Cloudflare Access
  5. Copy the values shown in the dialog (you'll need the AUD tag later). Note: The "Manage Cloudflare Access" link in the dialog may 404 — ignore it.
  6. To configure who can access, go to Zero Trust in the Cloudflare dashboard sidebar → Access → Applications, and find your worker's application:
    • Add your email address to the allow list
    • Or configure other identity providers (Google, GitHub, etc.)
  7. Copy the Application Audience (AUD) tag from the Access application settings. This will be your CF_ACCESS_AUD in Step 2 below

2. Set Access Secrets

After enabling Cloudflare Access, set the secrets so the worker can validate JWTs:

# Your Cloudflare Access team domain (e.g., "myteam.cloudflareaccess.com")
npx wrangler secret put CF_ACCESS_TEAM_DOMAIN

# The Application Audience (AUD) tag from your Access application that you copied in the step above
npx wrangler secret put CF_ACCESS_AUD

You can find your team domain in the Zero Trust Dashboard under Settings > Custom Pages (it's the subdomain before .cloudflareaccess.com).

3. Redeploy

npm run deploy

Now visit /_admin/ and you'll be prompted to authenticate via Cloudflare Access before accessing the admin UI.

Alternative: Manual Access Application

If you prefer more control, you can manually create an Access application:

  1. Go to Cloudflare Zero Trust Dashboard
  2. Navigate to Access > Applications
  3. Create a new Self-hosted application
  4. Set the application domain to your Worker URL (e.g., moltbot-sandbox.your-subdomain.workers.dev)
  5. Add paths to protect: /_admin/*, /api/*, /debug/*
  6. Configure your desired identity providers (e.g., email OTP, Google, GitHub)
  7. Copy the Application Audience (AUD) tag and set the secrets as shown above

Local Development

For local development, create a .dev.vars file with:

DEV_MODE=true               # Skip Cloudflare Access auth + bypass device pairing
DEBUG_ROUTES=true           # Enable /debug/* routes (optional)

Authentication

By default, moltbot uses device pairing for authentication. When a new device (browser, CLI, etc.) connects, it must be approved via the admin UI at /_admin/.

Device Pairing

  1. A device connects to the gateway
  2. The connection is held pending until approved
  3. An admin approves the device via /_admin/
  4. The device is now paired and can connect freely

This is the most secure option as it requires explicit approval for each device.

Gateway Token (Required)

A gateway token is required to access the Control UI when hosted remotely. Pass it as a query parameter:

https://your-worker.workers.dev/?token=YOUR_TOKEN
wss://your-worker.workers.dev/ws?token=YOUR_TOKEN

Note: Even with a valid token, new devices still require approval via the admin UI at /_admin/ (see Device Pairing above).

For local development only, set DEV_MODE=true in .dev.vars to skip Cloudflare Access authentication and enable allowInsecureAuth (bypasses device pairing entirely).
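
As a quick connectivity check (the worker hostname is a placeholder, and MOLTBOT_GATEWAY_TOKEN is assumed to still be exported from the Quick Start step):

```shell
# Fetch the Control UI with the gateway token; expect an HTTP 200 once
# the container has started (the first request can take 1-2 minutes).
curl -i "https://your-worker.workers.dev/?token=${MOLTBOT_GATEWAY_TOKEN}"
```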

Persistent Storage (R2)

By default, moltbot data (configs, paired devices, conversation history) is lost when the container restarts. To enable persistent storage across sessions, configure R2:

1. Create R2 API Token

  1. Go to R2 > Overview in the Cloudflare Dashboard
  2. Click Manage R2 API Tokens
  3. Create a new token with Object Read & Write permissions
  4. Select the moltbot-data bucket (created automatically on first deploy)
  5. Copy the Access Key ID and Secret Access Key

2. Set Secrets

# R2 Access Key ID
npx wrangler secret put R2_ACCESS_KEY_ID

# R2 Secret Access Key
npx wrangler secret put R2_SECRET_ACCESS_KEY

# Your Cloudflare Account ID
npx wrangler secret put CF_ACCOUNT_ID

To find your Account ID: Go to the Cloudflare Dashboard, click the three dots menu next to your account name, and select "Copy Account ID".

How It Works

R2 storage uses a backup/restore approach for simplicity:

On container startup:

  • If R2 is mounted and contains backup data, it's restored to the moltbot config directory
  • OpenClaw uses its default paths (no special configuration needed)

During operation:

  • A cron job runs every 5 minutes to sync the moltbot config to R2
  • You can also trigger a manual backup from the admin UI at /_admin/

In the admin UI:

  • When R2 is configured, you'll see "Last backup: [timestamp]"
  • Click "Backup Now" to trigger an immediate sync

Without R2 credentials, moltbot still works but uses ephemeral storage (data lost on container restart).

Container Lifecycle

By default, the sandbox container stays alive indefinitely (SANDBOX_SLEEP_AFTER=never). This is recommended because cold starts take 1-2 minutes.

To reduce costs for infrequently used deployments, you can configure the container to sleep after a period of inactivity:

npx wrangler secret put SANDBOX_SLEEP_AFTER
# Enter: 10m (or 1h, 30m, etc.)

When the container sleeps, the next request will trigger a cold start. If you have R2 storage configured, your paired devices and data will persist across restarts.

Admin UI

Access the admin UI at /_admin/ to:

  • R2 Storage Status - Shows if R2 is configured, last backup time, and a "Backup Now" button
  • Restart Gateway - Kill and restart the moltbot gateway process
  • Device Pairing - View pending requests, approve devices individually or all at once, view paired devices

The admin UI requires Cloudflare Access authentication (or DEV_MODE=true for local development).

Debug Endpoints

Debug endpoints are available at /debug/* when enabled (requires DEBUG_ROUTES=true and Cloudflare Access):

  • GET /debug/processes - List all container processes
  • GET /debug/logs?id=<process_id> - Get logs for a specific process
  • GET /debug/version - Get container and moltbot version info
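
A sketch of calling these endpoints with curl (the hostname and process id are placeholders; since the routes sit behind Cloudflare Access, a service-token header pair such as CF-Access-Client-Id/CF-Access-Client-Secret is one way to authenticate non-interactively):

```shell
# List container processes, then fetch logs for one of them.
# ACCESS_ID/ACCESS_SECRET are assumed Cloudflare Access service-token values.
curl -H "CF-Access-Client-Id: ${ACCESS_ID}" \
     -H "CF-Access-Client-Secret: ${ACCESS_SECRET}" \
     "https://your-worker.workers.dev/debug/processes"
curl -H "CF-Access-Client-Id: ${ACCESS_ID}" \
     -H "CF-Access-Client-Secret: ${ACCESS_SECRET}" \
     "https://your-worker.workers.dev/debug/logs?id=some-process-id"
```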

Optional: Chat Channels

Telegram

npx wrangler secret put TELEGRAM_BOT_TOKEN
npm run deploy

Discord

npx wrangler secret put DISCORD_BOT_TOKEN
npm run deploy

Slack

npx wrangler secret put SLACK_BOT_TOKEN
npx wrangler secret put SLACK_APP_TOKEN
npm run deploy

Optional: Browser Automation (CDP)

This worker includes a Chrome DevTools Protocol (CDP) shim that enables browser automation capabilities. This allows OpenClaw to control a headless browser for tasks like web scraping, screenshots, and automated testing.

Setup

  1. Set a shared secret for authentication:
npx wrangler secret put CDP_SECRET
# Enter a secure random string
  2. Set your worker's public URL:
npx wrangler secret put WORKER_URL
# Enter: https://your-worker.workers.dev
  3. Redeploy:
npm run deploy

Endpoints

Endpoint | Description
GET /cdp/json/version | Browser version information
GET /cdp/json/list | List available browser targets
GET /cdp/json/new | Create a new browser target
WS /cdp/devtools/browser/{id} | WebSocket connection for CDP commands

All endpoints require authentication via the ?secret=<CDP_SECRET> query parameter.
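
For example, querying the version endpoint with curl (the worker hostname is a placeholder; CDP_SECRET is the value you set above):

```shell
# Ask the CDP shim for browser version information.
curl "https://your-worker.workers.dev/cdp/json/version?secret=${CDP_SECRET}"
```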

Built-in Skills

The container includes pre-installed skills in /root/clawd/skills/:

cloudflare-browser

Browser automation via the CDP shim. Requires CDP_SECRET and WORKER_URL to be set (see Browser Automation above).

Scripts:

  • screenshot.js - Capture a screenshot of a URL
  • video.js - Create a video from multiple URLs
  • cdp-client.js - Reusable CDP client library

Usage:

# Screenshot
node /root/clawd/skills/cloudflare-browser/scripts/screenshot.js https://example.com output.png

# Video from multiple URLs
node /root/clawd/skills/cloudflare-browser/scripts/video.js "https://site1.com,https://site2.com" output.mp4 --scroll

See skills/cloudflare-browser/SKILL.md for full documentation.

Optional: Cloudflare AI Gateway

You can route API requests through Cloudflare AI Gateway for caching, rate limiting, analytics, and cost tracking. OpenClaw has native support for Cloudflare AI Gateway as a first-class provider.

AI Gateway acts as a proxy between OpenClaw and your AI provider (e.g., Anthropic). Requests are sent to https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/anthropic instead of directly to api.anthropic.com, giving you Cloudflare's analytics, caching, and rate limiting. You still need a provider API key (e.g., your Anthropic API key) — the gateway forwards it to the upstream provider.

Setup

  1. Create an AI Gateway in the AI Gateway section of the Cloudflare Dashboard.
  2. Set the three required secrets:
# Your AI provider's API key (e.g., your Anthropic API key).
# This is passed through the gateway to the upstream provider.
npx wrangler secret put CLOUDFLARE_AI_GATEWAY_API_KEY

# Your Cloudflare account ID
npx wrangler secret put CF_AI_GATEWAY_ACCOUNT_ID

# Your AI Gateway ID (from the gateway overview page)
npx wrangler secret put CF_AI_GATEWAY_GATEWAY_ID

All three are required. OpenClaw constructs the gateway URL from the account ID and gateway ID, and passes the API key to the upstream provider through the gateway.

  3. Redeploy:
npm run deploy

When Cloudflare AI Gateway is configured, it takes precedence over direct ANTHROPIC_API_KEY or OPENAI_API_KEY.

Choosing a Model

By default, AI Gateway uses Anthropic's Claude Sonnet 4.5. To use a different model or provider, set CF_AI_GATEWAY_MODEL with the format provider/model-id:

npx wrangler secret put CF_AI_GATEWAY_MODEL
# Enter: workers-ai/@cf/meta/llama-3.3-70b-instruct-fp8-fast

This works with any AI Gateway provider:

  • Workers AI: set CF_AI_GATEWAY_MODEL to workers-ai/@cf/meta/llama-3.3-70b-instruct-fp8-fast (API key: a Cloudflare API token)
  • OpenAI: openai/gpt-4o (API key: an OpenAI API key)
  • Anthropic: anthropic/claude-sonnet-4-5 (API key: an Anthropic API key)
  • Groq: groq/llama-3.3-70b (API key: a Groq API key)

Note: CLOUDFLARE_AI_GATEWAY_API_KEY must match the provider you're using — it's your provider's API key, forwarded through the gateway. You can only use one provider at a time through the gateway. For multiple providers, use direct keys (ANTHROPIC_API_KEY, OPENAI_API_KEY) alongside the gateway config.
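Because the value is a plain provider/model-id string, the provider prefix and the model ID split at the first slash; a quick shell illustration:

```shell
CF_AI_GATEWAY_MODEL="workers-ai/@cf/meta/llama-3.3-70b-instruct-fp8-fast"

PROVIDER="${CF_AI_GATEWAY_MODEL%%/*}"   # text before the first slash
MODEL_ID="${CF_AI_GATEWAY_MODEL#*/}"    # everything after the first slash
echo "$PROVIDER"
echo "$MODEL_ID"
```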

Workers AI with Unified Billing

With Unified Billing, you can use Workers AI models without a separate provider API key — Cloudflare bills you directly. Set CLOUDFLARE_AI_GATEWAY_API_KEY to your AI Gateway authentication token (the cf-aig-authorization token).

Legacy AI Gateway Configuration

The previous AI_GATEWAY_API_KEY + AI_GATEWAY_BASE_URL approach is still supported for backward compatibility but is deprecated in favor of the native configuration above.

All Secrets Reference

An asterisk means the secret is required only for the configuration it applies to (AI Gateway vs. direct keys, or the admin UI).

  • CLOUDFLARE_AI_GATEWAY_API_KEY (required*): Your AI provider's API key, passed through the gateway (e.g. your Anthropic API key). Requires CF_AI_GATEWAY_ACCOUNT_ID and CF_AI_GATEWAY_GATEWAY_ID
  • CF_AI_GATEWAY_ACCOUNT_ID (required*): Your Cloudflare account ID (used to construct the gateway URL)
  • CF_AI_GATEWAY_GATEWAY_ID (required*): Your AI Gateway ID (used to construct the gateway URL)
  • CF_AI_GATEWAY_MODEL (optional): Override the default model, in the format provider/model-id (e.g. workers-ai/@cf/meta/llama-3.3-70b-instruct-fp8-fast). See Choosing a Model
  • ANTHROPIC_API_KEY (required*): Direct Anthropic API key (alternative to AI Gateway)
  • ANTHROPIC_BASE_URL (optional): Direct Anthropic API base URL
  • OPENAI_API_KEY (optional): OpenAI API key (alternative provider)
  • AI_GATEWAY_API_KEY (optional): Legacy AI Gateway API key (deprecated; use CLOUDFLARE_AI_GATEWAY_API_KEY instead)
  • AI_GATEWAY_BASE_URL (optional): Legacy AI Gateway endpoint URL (deprecated)
  • CF_ACCESS_TEAM_DOMAIN (required*): Cloudflare Access team domain (required for the admin UI)
  • CF_ACCESS_AUD (required*): Cloudflare Access application audience (required for the admin UI)
  • MOLTBOT_GATEWAY_TOKEN (required): Gateway token for authentication (pass via the ?token= query parameter)
  • DEV_MODE (optional): Set to true to skip CF Access auth and device pairing (local dev only)
  • DEBUG_ROUTES (optional): Set to true to enable /debug/* routes
  • SANDBOX_SLEEP_AFTER (optional): Container sleep timeout: never (default) or a duration like 10m or 1h
  • R2_ACCESS_KEY_ID (optional): R2 access key for persistent storage
  • R2_SECRET_ACCESS_KEY (optional): R2 secret key for persistent storage
  • CF_ACCOUNT_ID (optional): Cloudflare account ID (required for R2 storage)
  • TELEGRAM_BOT_TOKEN (optional): Telegram bot token
  • TELEGRAM_DM_POLICY (optional): Telegram DM policy: pairing (default) or open
  • DISCORD_BOT_TOKEN (optional): Discord bot token
  • DISCORD_DM_POLICY (optional): Discord DM policy: pairing (default) or open
  • SLACK_BOT_TOKEN (optional): Slack bot token
  • SLACK_APP_TOKEN (optional): Slack app token
  • CDP_SECRET (optional): Shared secret for CDP endpoint authentication (see Browser Automation)
  • WORKER_URL (optional): Public URL of the worker (required for CDP)

Security Considerations

Authentication Layers

OpenClaw in Cloudflare Sandbox uses multiple authentication layers:

  1. Cloudflare Access - Protects admin routes (/_admin/, /api/*, /debug/*). Only authenticated users can manage devices.

  2. Gateway Token - Required to access the Control UI. Pass via ?token= query parameter. Keep this secret.

  3. Device Pairing - Each device (browser, CLI, chat platform DM) must be explicitly approved via the admin UI before it can interact with the assistant. This is the default "pairing" DM policy.
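For example, the Control UI URL with the gateway token attached has this shape (both values below are hypothetical):

```shell
WORKER_URL="https://openclaw.example.workers.dev"   # hypothetical worker URL
MOLTBOT_GATEWAY_TOKEN="replace-with-your-token"     # hypothetical token value

CONTROL_UI_URL="${WORKER_URL}/?token=${MOLTBOT_GATEWAY_TOKEN}"
echo "$CONTROL_UI_URL"
```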

Troubleshooting

npm run dev fails with an Unauthorized error: You need to enable Cloudflare Containers in the Containers dashboard

Gateway fails to start: Check npx wrangler secret list and npx wrangler tail

Config changes not working: Edit the # Build cache bust: comment in Dockerfile and redeploy

Slow first request: Cold starts take 1-2 minutes. Subsequent requests are faster.

R2 not mounting: Check that all three R2 secrets are set (R2_ACCESS_KEY_ID, R2_SECRET_ACCESS_KEY, CF_ACCOUNT_ID). Note: R2 mounting only works in production, not with wrangler dev.

Access denied on admin routes: Ensure CF_ACCESS_TEAM_DOMAIN and CF_ACCESS_AUD are set, and that your Cloudflare Access application is configured correctly.

Devices not appearing in admin UI: Device list commands take 10-15 seconds due to WebSocket connection overhead. Wait and refresh.

WebSocket issues in local development: wrangler dev has known limitations with WebSocket proxying through the sandbox. HTTP requests work but WebSocket connections may fail. Deploy to Cloudflare for full functionality.

Known Issues

Windows: Gateway fails to start with exit code 126 (permission denied)

On Windows, Git may check out shell scripts with CRLF line endings instead of LF. This causes start-openclaw.sh to fail with exit code 126 inside the Linux container. Ensure your repository uses LF line endings — configure Git with git config --global core.autocrlf input or add a .gitattributes file with * text=auto eol=lf. See #64 for details.
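The .gitattributes workaround mentioned above can be created like this (a minimal sketch):

```shell
# Normalize text files and force LF on checkout so shell scripts run inside the Linux container
printf '* text=auto eol=lf\n' > .gitattributes
cat .gitattributes
```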

Links

from https://github.com/cloudflare/moltworker

ks-devops


This is a cloud-native application that focuses on the DevOps area.

kubesphere.io/devops/

KubeSphere DevOps integrates popular CI/CD tools, provides CI/CD Pipelines based on Jenkins, offers automation toolkits including Binary-to-Image (B2I) and Source-to-Image (S2I), and boosts continuous delivery across Kubernetes clusters.

With the container orchestration capability of Kubernetes, KubeSphere DevOps scales Jenkins Agents dynamically, improves CI/CD workflow efficiency, and helps organizations accelerate the time to market for their products.

Features

  • Out-of-the-Box CI/CD Pipelines
  • Built-in Automation Toolkits for DevOps with Kubernetes
  • Use Jenkins Pipelines to Implement DevOps on Top of Kubernetes
  • Manage Pipelines via CLI

Get Started

Quick Start

  • Install KubeSphere via KubeKey (or the methods described here).

    kk create cluster --with-kubesphere
  • Enable DevOps application

    kubectl patch -nkubesphere-system cc ks-installer --type=json -p='[{"op": "replace", "path": "/spec/devops/enabled", "value": true}]'

For more information, refer to the documentation.

Next Steps

  • A Separate Front-End Project of KS-DevOps
  • Auth Support
    • OIDC support as a default provider

Communication Channels

from https://github.com/kubesphere/ks-devops

Proxy-Provider-Converter


A tool that converts Clash/Surge subscriptions into a Proxy Provider (for Clash) or an External Group (for Surge).

Available at: https://proxy-provider-converter.vercel.app

What It Does

  • Split a subscription into a node list: convert a whole Clash subscription into a reusable list of nodes that you can reference from your config file.
  • Merge multiple subscriptions: combine nodes from several subscriptions for unified management.
  • Compatible with both Clash and Surge: generate a Clash Proxy Provider or a Surge External Group in one click.
  • Decouple subscriptions from rules: node sources can come from other people's subscriptions (referenced via a Provider or External Group), while rules and policies stay freely written and maintained in your own config.

Quick Start

  1. Open the site: https://proxy-provider-converter.vercel.app
  2. Paste your Clash or Surge subscription link
  3. Choose the output type: Clash Proxy Provider or Surge External Group
  4. Click Convert, then copy the result into your config file
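For Clash output, the link you copy is typically referenced from your config as a proxy-provider. A minimal sketch, with illustrative placeholder values (the url is whatever link the converter gave you):

```yaml
proxy-providers:
  my-subscription:                  # any name you like
    type: http
    url: "<link copied from the converter>"
    interval: 3600                  # re-download the node list hourly (placeholder)
    path: ./providers/my-subscription.yaml
    health-check:
      enable: true
      url: http://www.gstatic.com/generate_204
      interval: 600
```

A proxy group can then reference the provider with a use: [my-subscription] entry, which is what keeps rules independent of the subscription.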

Self-Hosted Deployment

When you use the public site, the site owner could see your subscription URL. If you would rather not expose it, follow the steps below to build your own private converter at no cost.

Prerequisite: a GitHub account.

  1. Click Fork in the top-right corner to copy the project to your own GitHub account
  2. Log in to Vercel and link it to your GitHub account
  3. Choose New Project → find proxy-provider-converter → Import → select your account → Deploy
  4. After deployment finishes, click Visit to open your own converter

from https://github.com/qier222/proxy-provider-converter