
Tuesday, 17 February 2026

ndjp.net: free second-level domains resolved in Japan



NAKN Internet offers free second-level domains under the ndjp.net suffix, supporting A, AAAA, TXT, CNAME, and ALIAS records. All servers and data are located in Japan. The service is completely free, carries no ads, and supports DDNS dynamic updates.

Website: https://ndjp.net/
New version: https://manage.ndjp.net/

Registration is simple: there is nothing to fill in. After passing the CAPTCHA in the login box, click the sign-up button on the left to create an account.

Clicking it generates a random username and password, shown only this once, so write them down.

Adding a domain
Open the management panel and you will see that a subdomain matching your username has already been created. You can add new domains below it.

DNS records
A default A record is included, and it can be deleted or modified. You can also add A, AAAA, TXT, CNAME, and ALIAS records as well as multi-level subdomains, and update records dynamically through the API.

Under personal information you can complete your profile, add an email address, change your password, and so on. The new version supports password recovery: https://manage.ndjp.net/forgot-password

ZeroClaw

Fast, small, and fully autonomous AI assistant infrastructure.

Zero overhead. Zero compromise. 100% Rust. 100% Agnostic.
⚡️ Runs on $10 hardware with <5MB RAM: That's 99% less memory than OpenClaw and 98% cheaper than a Mac mini!


Fast, small, and fully autonomous AI assistant infrastructure — deploy anywhere, swap anything.

~3.4MB binary · <10ms startup · 1,017 tests · 22+ providers · 8 traits · Pluggable everything

✨ Features

  • 🏎️ Ultra-Lightweight: <5MB memory footprint, 99% smaller than the OpenClaw core.
  • 💰 Minimal Cost: efficient enough to run on $10 hardware, 98% cheaper than a Mac mini.
  • ⚡ Lightning Fast: 400× faster startup, booting in <10ms (under 1s even on 0.6GHz cores).
  • 🌍 True Portability: Single self-contained binary across ARM, x86, and RISC-V.

Why teams pick ZeroClaw

  • Lean by default: small Rust binary, fast startup, low memory footprint.
  • Secure by design: pairing, strict sandboxing, explicit allowlists, workspace scoping.
  • Fully swappable: core systems are traits (providers, channels, tools, memory, tunnels).
  • No lock-in: OpenAI-compatible provider support + pluggable custom endpoints.
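
The no-lock-in point is concrete: a custom provider only has to speak the OpenAI-compatible chat-completions wire format. A rough Python sketch of that format (not ZeroClaw code; the URL and model name are placeholders):

# Sketch of the OpenAI-compatible request a custom provider endpoint must
# accept; base_url and model are placeholders, not a real service.
import json
import urllib.request

def chat(base_url, api_key, model, prompt):
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# chat("https://your-api.com", "sk-...", "some-model", "Hello")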

Benchmark Snapshot (ZeroClaw vs OpenClaw)

Local machine quick benchmark (macOS arm64, Feb 2026) normalized for 0.8GHz edge hardware.


                   OpenClaw        NanoBot         PicoClaw         ZeroClaw 🦀
Language           TypeScript      Python          Go               Rust
RAM                > 1GB           > 100MB         < 10MB           < 5MB
Startup (0.8GHz)   > 500s          > 30s           < 1s             < 10ms
Binary Size        ~28MB (dist)    N/A (scripts)   ~8MB             3.4MB
Cost               Mac mini $599   Linux SBC ~$50  Linux board $10  Any hardware $10

Notes: ZeroClaw results measured with /usr/bin/time -l on release builds. OpenClaw requires Node.js runtime (~390MB overhead). PicoClaw and ZeroClaw are static binaries.

ZeroClaw vs OpenClaw Comparison

Reproduce ZeroClaw numbers locally:

cargo build --release
ls -lh target/release/zeroclaw

/usr/bin/time -l target/release/zeroclaw --help
/usr/bin/time -l target/release/zeroclaw status

Prerequisites

A working Rust toolchain (rustup with a recent stable cargo) is required to build ZeroClaw from source on Windows, Linux, and macOS.

Quick Start

git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw
cargo build --release --locked
cargo install --path . --force --locked

# Quick setup (no prompts)
zeroclaw onboard --api-key sk-... --provider openrouter

# Or interactive wizard
zeroclaw onboard --interactive

# Or quickly repair channels/allowlists only
zeroclaw onboard --channels-only

# Chat
zeroclaw agent -m "Hello, ZeroClaw!"

# Interactive mode
zeroclaw agent

# Start the gateway (webhook server)
zeroclaw gateway                # default: 127.0.0.1:8080
zeroclaw gateway --port 0       # random port (security hardened)

# Start full autonomous runtime
zeroclaw daemon

# Check status
zeroclaw status

# Run system diagnostics
zeroclaw doctor

# Check channel health
zeroclaw channel doctor

# Get integration setup details
zeroclaw integrations info Telegram

# Manage background service
zeroclaw service install
zeroclaw service status

# Migrate memory from OpenClaw (safe preview first)
zeroclaw migrate openclaw --dry-run
zeroclaw migrate openclaw

Dev fallback (no global install): prefix commands with cargo run --release -- (example: cargo run --release -- status).

Architecture

Every subsystem is a trait — swap implementations with a config change, zero code changes.

ZeroClaw Architecture

  • AI Models (trait Provider): ships with 22+ providers (OpenRouter, Anthropic, OpenAI, Ollama, Venice, Groq, Mistral, xAI, DeepSeek, Together, Fireworks, Perplexity, Cohere, Bedrock, etc.); extend with custom:https://your-api.com, i.e. any OpenAI-compatible API.
  • Channels (trait Channel): ships with CLI, Telegram, Discord, Slack, iMessage, Matrix, WhatsApp, Webhook; extend with any messaging API.
  • Memory (trait Memory): ships with SQLite hybrid search (FTS5 + vector cosine similarity), a Lucid bridge (CLI sync + SQLite fallback), and Markdown; extend with any persistence backend.
  • Tools (trait Tool): ships with shell, file_read, file_write, memory_store, memory_recall, memory_forget, browser_open (Brave + allowlist), browser (agent-browser / rust-native), composio (optional); extend with any capability.
  • Observability (trait Observer): ships with Noop, Log, Multi; extend with Prometheus, OTel.
  • Runtime (trait RuntimeAdapter): ships with Native and Docker (sandboxed); WASM is planned (unsupported kinds fail fast).
  • Security (trait SecurityPolicy): gateway pairing, sandbox, allowlists, rate limits, filesystem scoping, encrypted secrets.
  • Identity (trait IdentityConfig): ships with OpenClaw (markdown) and AIEOS v1.1 (JSON); extend with any identity format.
  • Tunnel (trait Tunnel): ships with None, Cloudflare, Tailscale, ngrok, Custom; extend with any tunnel binary.
  • Heartbeat (Engine): HEARTBEAT.md periodic tasks.
  • Skills (Loader): TOML manifests + SKILL.md instructions; extend with community skill packs.
  • Integrations (Registry): 50+ integrations across 9 categories; extend via the plugin system.

Runtime support (current)

  • ✅ Supported today: runtime.kind = "native" or runtime.kind = "docker"
  • 🚧 Planned, not implemented yet: WASM / edge runtimes

When an unsupported runtime.kind is configured, ZeroClaw now exits with a clear error instead of silently falling back to native.

Memory System (Full-Stack Search Engine)

All custom, zero external dependencies — no Pinecone, no Elasticsearch, no LangChain:

  • Vector DB: embeddings stored as BLOBs in SQLite, with cosine-similarity search
  • Keyword search: FTS5 virtual tables with BM25 scoring
  • Hybrid merge: custom weighted merge function (vector.rs)
  • Embeddings: EmbeddingProvider trait (OpenAI, custom URL, or noop)
  • Chunking: line-based markdown chunker with heading preservation
  • Caching: SQLite embedding_cache table with LRU eviction
  • Safe reindex: rebuild FTS5 and re-embed missing vectors atomically

The agent automatically recalls, saves, and manages memory via tools.
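
The merge itself is Rust code in vector.rs; as a loose Python sketch of the idea (not the actual implementation), per-document scores from the two retrievers are normalized and combined with the configured weights:

# Illustrative sketch of weighted hybrid merging; each hits dict maps
# doc_id -> raw score from one retriever.
def hybrid_merge(vector_hits, keyword_hits, vector_weight=0.7, keyword_weight=0.3):
    def normalize(hits):
        top = max(hits.values(), default=0.0) or 1.0
        return {doc: score / top for doc, score in hits.items()}
    v, k = normalize(vector_hits), normalize(keyword_hits)
    merged = {doc: vector_weight * v.get(doc, 0.0) + keyword_weight * k.get(doc, 0.0)
              for doc in set(v) | set(k)}
    return sorted(merged, key=merged.get, reverse=True)

# hybrid_merge({"a": 0.9, "b": 0.4}, {"b": 12.0, "c": 7.5}) -> ['a', 'b', 'c']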

[memory]
backend = "sqlite"          # "sqlite", "lucid", "markdown", "none"
auto_save = true
embedding_provider = "openai"
vector_weight = 0.7
keyword_weight = 0.3

# backend = "none" uses an explicit no-op memory backend (no persistence)

# Optional for backend = "lucid"
# ZEROCLAW_LUCID_CMD=/usr/local/bin/lucid   # default: lucid
# ZEROCLAW_LUCID_BUDGET=200                 # default: 200
# ZEROCLAW_LUCID_LOCAL_HIT_THRESHOLD=3      # local hit count to skip external recall
# ZEROCLAW_LUCID_RECALL_TIMEOUT_MS=120      # low-latency budget for lucid context recall
# ZEROCLAW_LUCID_STORE_TIMEOUT_MS=800        # async sync timeout for lucid store
# ZEROCLAW_LUCID_FAILURE_COOLDOWN_MS=15000   # cooldown after lucid failure to avoid repeated slow attempts

Security

ZeroClaw enforces security at every layer — not just the sandbox. It passes all items from the community security checklist.

Security Checklist

1. Gateway not publicly exposed: binds 127.0.0.1 by default; refuses 0.0.0.0 without a tunnel or explicit allow_public_bind = true.
2. Pairing required: a 6-digit one-time code on startup, exchanged via POST /pair for a bearer token; all /webhook requests require Authorization: Bearer <token>.
3. Filesystem scoped (no /): workspace_only = true by default; 14 system dirs + 4 sensitive dotfiles blocked; null-byte injection blocked; symlink escapes detected via canonicalization and resolved-path workspace checks in the file read/write tools.
4. Access via tunnel only: the gateway refuses a public bind without an active tunnel; supports Tailscale, Cloudflare, ngrok, or any custom tunnel.

Run your own nmap: nmap -p 1-65535 <your-host> — ZeroClaw binds to localhost only, so nothing is exposed unless you explicitly configure a tunnel.
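
Item 3's symlink handling comes down to resolving a path before checking containment. A sketch of the idea in Python (not the Rust implementation):

# Sketch of resolved-path workspace scoping: canonicalize first, then test
# containment, so "../" tricks and symlinks cannot escape the workspace.
from pathlib import Path

def is_within_workspace(candidate: str, workspace: str) -> bool:
    if "\x00" in candidate:                 # null-byte injection blocked
        return False
    resolved = Path(candidate).resolve()    # follows symlinks, strips ".."
    root = Path(workspace).resolve()
    return resolved == root or root in resolved.parents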

Channel allowlists (Telegram / Discord / Slack)

Inbound sender policy is now consistent:

  • Empty allowlist = deny all inbound messages
  • "*" = allow all (explicit opt-in)
  • Otherwise = exact-match allowlist

This keeps accidental exposure low by default.
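
In sketch form, the policy is just the following decision function (illustrative Python, not the Rust source):

# The three allowlist rules above, as one small decision function.
def is_sender_allowed(sender: str, allowlist: list[str]) -> bool:
    if not allowlist:            # empty allowlist = deny all inbound
        return False
    if "*" in allowlist:         # explicit opt-in: allow everyone
        return True
    return sender in allowlist   # otherwise exact match only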

Recommended low-friction setup (secure + fast):

  • Telegram: allowlist your own @username (without @) and/or your numeric Telegram user ID.
  • Discord: allowlist your own Discord user ID.
  • Slack: allowlist your own Slack member ID (usually starts with U).
  • Use "*" only for temporary open testing.

If you're not sure which identity to use:

  1. Start channels and send one message to your bot.
  2. Read the warning log to see the exact sender identity.
  3. Add that value to the allowlist and rerun channels-only setup.

If you hit authorization warnings in logs (for example: ignoring message from unauthorized user), rerun channel setup only:

zeroclaw onboard --channels-only

WhatsApp Business Cloud API Setup

WhatsApp uses Meta's Cloud API with webhooks (push-based, not polling):

  1. Create a Meta Business App:

  2. Get your credentials:

    • Access Token: From WhatsApp → API Setup → Generate token (or create a System User for permanent tokens)
    • Phone Number ID: From WhatsApp → API Setup → Phone number ID
    • Verify Token: You define this (any random string) — Meta will send it back during webhook verification
  3. Configure ZeroClaw:

    [channels_config.whatsapp]
    access_token = "EAABx..."
    phone_number_id = "123456789012345"
    verify_token = "my-secret-verify-token"
    allowed_numbers = ["+1234567890"]  # E.164 format, or ["*"] for all
  4. Start the gateway with a tunnel:

    zeroclaw gateway --port 8080

    WhatsApp requires HTTPS, so use a tunnel (ngrok, Cloudflare, Tailscale Funnel).

  5. Configure Meta webhook:

    • In Meta Developer Console → WhatsApp → Configuration → Webhook
    • Callback URL: https://your-tunnel-url/whatsapp
    • Verify Token: Same as your verify_token in config
    • Subscribe to messages field
  6. Test: Send a message to your WhatsApp Business number — ZeroClaw will respond via the LLM.
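
For reference, the verification in step 5 is a plain GET echo: Meta calls your callback URL with hub.mode, hub.verify_token, and hub.challenge, and expects the challenge back when the token matches. ZeroClaw's gateway handles this for you; a minimal standalone sketch of the handshake:

# Minimal sketch of Meta's webhook verification handshake (the gateway
# implements this internally; shown only to make the handshake concrete).
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

VERIFY_TOKEN = "my-secret-verify-token"      # must match verify_token in config

class Hook(BaseHTTPRequestHandler):
    def do_GET(self):
        q = parse_qs(urlparse(self.path).query)
        if (q.get("hub.mode") == ["subscribe"]
                and q.get("hub.verify_token") == [VERIFY_TOKEN]):
            self.send_response(200)
            self.end_headers()
            self.wfile.write(q["hub.challenge"][0].encode())   # echo challenge back
        else:
            self.send_response(403)
            self.end_headers()

# HTTPServer(("127.0.0.1", 8080), Hook).serve_forever()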

Configuration

Config: ~/.zeroclaw/config.toml (created by onboard)

api_key = "sk-..."
default_provider = "openrouter"
default_model = "anthropic/claude-sonnet-4-20250514"
default_temperature = 0.7

[memory]
backend = "sqlite"              # "sqlite", "lucid", "markdown", "none"
auto_save = true
embedding_provider = "openai"   # "openai", "noop"
vector_weight = 0.7
keyword_weight = 0.3

# backend = "none" disables persistent memory via no-op backend

[gateway]
require_pairing = true          # require pairing code on first connect
allow_public_bind = false       # refuse 0.0.0.0 without tunnel

[autonomy]
level = "supervised"            # "readonly", "supervised", "full" (default: supervised)
workspace_only = true           # default: true — scoped to workspace
allowed_commands = ["git", "npm", "cargo", "ls", "cat", "grep"]
forbidden_paths = ["/etc", "/root", "/proc", "/sys", "~/.ssh", "~/.gnupg", "~/.aws"]

[runtime]
kind = "native"                # "native" or "docker"

[runtime.docker]
image = "alpine:3.20"          # container image for shell execution
network = "none"               # docker network mode ("none", "bridge", etc.)
memory_limit_mb = 512          # optional memory limit in MB
cpu_limit = 1.0                # optional CPU limit
read_only_rootfs = true        # mount root filesystem as read-only
mount_workspace = true         # mount workspace into /workspace
allowed_workspace_roots = []   # optional allowlist for workspace mount validation

[heartbeat]
enabled = false
interval_minutes = 30

[tunnel]
provider = "none"               # "none", "cloudflare", "tailscale", "ngrok", "custom"

[secrets]
encrypt = true                  # API keys encrypted with local key file

[browser]
enabled = false                        # opt-in browser_open + browser tools
allowed_domains = ["docs.rs"]         # required when browser is enabled
backend = "agent_browser"             # "agent_browser" (default), "rust_native", "computer_use", "auto"
native_headless = true                 # applies when backend uses rust-native
native_webdriver_url = "http://127.0.0.1:9515" # WebDriver endpoint (chromedriver/selenium)
# native_chrome_path = "/usr/bin/chromium"  # optional explicit browser binary for driver

[browser.computer_use]
endpoint = "http://127.0.0.1:8787/v1/actions" # computer-use sidecar HTTP endpoint
timeout_ms = 15000                    # per-action timeout
allow_remote_endpoint = false         # secure default: only private/localhost endpoint
window_allowlist = []                 # optional window title/process allowlist hints
# api_key = "..."                    # optional bearer token for sidecar
# max_coordinate_x = 3840             # optional coordinate guardrail
# max_coordinate_y = 2160             # optional coordinate guardrail

# Rust-native backend build flag:
# cargo build --release --features browser-native
# Ensure a WebDriver server is running, e.g. chromedriver --port=9515

# Computer-use sidecar contract (MVP)
# POST browser.computer_use.endpoint
# Request: {
#   "action": "mouse_click",
#   "params": {"x": 640, "y": 360, "button": "left"},
#   "policy": {"allowed_domains": [...], "window_allowlist": [...], "max_coordinate_x": 3840, "max_coordinate_y": 2160},
#   "metadata": {"session_name": "...", "source": "zeroclaw.browser", "version": "..."}
# }
# Response: {"success": true, "data": {...}} or {"success": false, "error": "..."}

[composio]
enabled = false                 # opt-in: 1000+ OAuth apps via composio.dev
# api_key = "cmp_..."          # optional: stored encrypted when [secrets].encrypt = true
entity_id = "default"         # default user_id for Composio tool calls

[identity]
format = "openclaw"             # "openclaw" (default, markdown files) or "aieos" (JSON)
# aieos_path = "identity.json"  # path to AIEOS JSON file (relative to workspace or absolute)
# aieos_inline = '{"identity":{"names":{"first":"Nova"}}}'  # inline AIEOS JSON
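
The computer-use sidecar contract sketched in the comments above is small enough to mock for local testing of [browser.computer_use]. A toy endpoint that honors the request/response shape (the action handling is a stub; a real sidecar would drive OS input APIs):

# Toy computer-use sidecar implementing the MVP contract above.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Sidecar(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        params, policy = body.get("params", {}), body.get("policy", {})
        x, y = params.get("x", 0), params.get("y", 0)
        if x > policy.get("max_coordinate_x", 3840) or y > policy.get("max_coordinate_y", 2160):
            result = {"success": False, "error": "coordinate outside guardrail"}
        else:                                   # stub: no real input is injected
            result = {"success": True, "data": {"echo": body.get("action")}}
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(result).encode())

# HTTPServer(("127.0.0.1", 8787), Sidecar).serve_forever()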

Python Companion Package (zeroclaw-tools)

For LLM providers with inconsistent native tool calling (e.g., GLM-5/Zhipu), ZeroClaw ships a Python companion package with LangGraph-based tool calling for guaranteed consistency:

pip install zeroclaw-tools

import asyncio

from zeroclaw_tools import create_agent, shell, file_read
from langchain_core.messages import HumanMessage

# Works with any OpenAI-compatible provider
agent = create_agent(
    tools=[shell, file_read],
    model="glm-5",
    api_key="your-key",
    base_url="https://api.z.ai/api/coding/paas/v4"
)

async def main():
    result = await agent.ainvoke({
        "messages": [HumanMessage(content="List files in /tmp")]
    })
    print(result["messages"][-1].content)

asyncio.run(main())

Why use it:

  • Consistent tool calling across all providers (even those with poor native support)
  • Automatic tool loop — keeps calling tools until the task is complete
  • Easy extensibility — add custom tools with @tool decorator
  • Discord bot integration included (Telegram planned)
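
For example, a custom tool can be defined with LangChain's @tool decorator and passed to create_agent alongside the built-ins. A sketch, assuming create_agent accepts any LangChain tool (the tool body is illustrative):

# Sketch: a custom word-count tool registered next to the built-in shell tool.
from langchain_core.tools import tool
from zeroclaw_tools import create_agent, shell

@tool
def word_count(text: str) -> int:
    """Count the number of words in a piece of text."""
    return len(text.split())

agent = create_agent(
    tools=[shell, word_count],
    model="glm-5",
    api_key="your-key",
    base_url="https://api.z.ai/api/coding/paas/v4",
)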

See python/README.md for full documentation.

Identity System (AIEOS Support)

ZeroClaw supports identity-agnostic AI personas through two formats:

OpenClaw (Default)

Traditional markdown files in your workspace:

  • IDENTITY.md — Who the agent is
  • SOUL.md — Core personality and values
  • USER.md — Who the agent is helping
  • AGENTS.md — Behavior guidelines

AIEOS (AI Entity Object Specification)

AIEOS is a standardization framework for portable AI identity. ZeroClaw supports AIEOS v1.1 JSON payloads, allowing you to:

  • Import identities from the AIEOS ecosystem
  • Export identities to other AIEOS-compatible systems
  • Maintain behavioral integrity across different AI models

Enable AIEOS

[identity]
format = "aieos"
aieos_path = "identity.json"  # relative to workspace or absolute path

Or inline JSON:

[identity]
format = "aieos"
aieos_inline = '''
{
  "identity": {
    "names": { "first": "Nova", "nickname": "N" }
  },
  "psychology": {
    "neural_matrix": { "creativity": 0.9, "logic": 0.8 },
    "traits": { "mbti": "ENTP" },
    "moral_compass": { "alignment": "Chaotic Good" }
  },
  "linguistics": {
    "text_style": { "formality_level": 0.2, "slang_usage": true }
  },
  "motivations": {
    "core_drive": "Push boundaries and explore possibilities"
  }
}
'''

AIEOS Schema Sections

  • identity: names, bio, origin, residence
  • psychology: neural matrix (cognitive weights), MBTI, OCEAN, moral compass
  • linguistics: text style, formality, catchphrases, forbidden words
  • motivations: core drive, short/long-term goals, fears
  • capabilities: skills and tools the agent can access
  • physicality: visual descriptors for image generation
  • history: origin story, education, occupation
  • interests: hobbies, favorites, lifestyle

See aieos.org for the full schema and live examples.

Gateway API

  • GET /health (no auth): health check (always public, no secrets leaked)
  • POST /pair (X-Pairing-Code header): exchange one-time code for bearer token
  • POST /webhook (Authorization: Bearer <token>): send a message: {"message": "your prompt"}
  • GET /whatsapp (query params): Meta webhook verification (hub.mode, hub.verify_token, hub.challenge)
  • POST /whatsapp (no auth; Meta signature): WhatsApp incoming message webhook
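
A client session against these endpoints might look like the following sketch (the gateway prints the pairing code at startup; the token field name in the /pair response is an assumption here):

# Sketch of pairing and sending one message through the gateway.
import json
import urllib.request

BASE = "http://127.0.0.1:8080"

def post(path, headers, payload=None):
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode() if payload else b"",
        headers={"Content-Type": "application/json", **headers})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

token = post("/pair", {"X-Pairing-Code": "123456"})["token"]    # field name assumed
print(post("/webhook", {"Authorization": f"Bearer {token}"},
           {"message": "your prompt"}))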

Commands

  • onboard: quick setup (default)
  • onboard --interactive: full interactive 7-step wizard
  • onboard --channels-only: reconfigure channels/allowlists only (fast repair flow)
  • agent -m "...": single message mode
  • agent: interactive chat mode
  • gateway: start webhook server (default: 127.0.0.1:8080)
  • gateway --port 0: random port mode
  • daemon: start long-running autonomous runtime
  • service install/start/stop/status/uninstall: manage user-level background service
  • doctor: diagnose daemon/scheduler/channel freshness
  • status: show full system status
  • channel doctor: run health checks for configured channels
  • integrations info <name>: show setup/status details for one integration

Development

cargo build              # Dev build
cargo build --release    # Release build (codegen-units=1, works on all devices including Raspberry Pi)
cargo build --profile release-fast    # Faster build (codegen-units=8, requires 16GB+ RAM)
cargo test               # 1,017 tests
cargo clippy             # Lint (0 warnings)
cargo fmt                # Format

# Run the SQLite vs Markdown benchmark
cargo test --test memory_comparison -- --nocapture

Pre-push hook

A git hook runs cargo fmt --check, cargo clippy -- -D warnings, and cargo test before every push. Enable it once:

git config core.hooksPath .githooks

To skip the hook when you need a quick push during development:

git push --no-verify

Build troubleshooting (Linux OpenSSL errors)

If you see an openssl-sys build error, sync dependencies and rebuild with the repository lockfile:

git pull
cargo build --release --locked
cargo install --path . --force --locked

ZeroClaw is configured to use rustls for HTTP/TLS dependencies; --locked keeps the transitive graph deterministic on fresh environments.


Support

ZeroClaw is an open-source project maintained with passion. If you find it useful and would like to support its continued development, hardware for testing, and coffee for the maintainer, you can support me here:

Buy Me a Coffee

🙏 Special Thanks

A heartfelt thank you to the communities and institutions that inspire and fuel this open-source work:

  • Harvard University — for fostering intellectual curiosity and pushing the boundaries of what's possible.
  • MIT — for championing open knowledge, open source, and the belief that technology should be accessible to everyone.
  • Sundai Club — for the community, the energy, and the relentless drive to build things that matter.
  • The World & Beyond 🌍✨ — to every contributor, dreamer, and builder out there making open source a force for good. This is for you.

We're building in the open because the best ideas come from everywhere. If you're reading this, you're part of it. Welcome. 🦀❤️

License

Apache 2.0 — see LICENSE and NOTICE for contributor attribution

Contributing

See CONTRIBUTING.md. Implement a trait, submit a PR:

  • CI workflow guide: docs/ci-map.md
  • New Provider: src/providers/
  • New Channel: src/channels/
  • New Observer: src/observability/
  • New Tool: src/tools/
  • New Memory: src/memory/
  • New Tunnel: src/tunnel/
  • New Skill: ~/.zeroclaw/workspace/skills/<name>/

ZeroClaw — Zero overhead. Zero compromise. Deploy anywhere.

from  https://github.com/zeroclaw-labs/zeroclaw

The first solo concert by an AI performer: Chinese virtual artist '六六' (Liu Liu)

Monday, 16 February 2026

太太万岁 (Long Live the Missus!)

 

The wife in the film thinks of her husband at almost every moment. If you ever meet a woman like that, cherish her!

darwin-xnu

 

Legacy mirror of Darwin Kernel. Replaced by https://github.com/apple-oss-distributions/xnu

 

What is XNU?

XNU kernel is part of the Darwin operating system for use in macOS and iOS operating systems. XNU is an acronym for X is Not Unix. XNU is a hybrid kernel combining the Mach kernel developed at Carnegie Mellon University with components from FreeBSD and a C++ API for writing drivers called IOKit. XNU runs on x86_64 for both single processor and multi-processor configurations.

XNU Source Tree

  • config - configurations for exported apis for supported architecture and platform
  • SETUP - Basic set of tools used for configuring the kernel, versioning, and kext symbol management.
  • EXTERNAL_HEADERS - Headers sourced from other projects to avoid dependency cycles when building. These headers should be regularly synced when source is updated.
  • libkern - C++ IOKit library code for handling of drivers and kexts.
  • libsa - kernel bootstrap code for startup
  • libsyscall - syscall library interface for userspace programs
  • libkdd - source for the user-space library for parsing kernel data, such as the kernel chunked data format.
  • makedefs - top level rules and defines for kernel build.
  • osfmk - Mach kernel based subsystems
  • pexpert - Platform specific code like interrupt handling, atomics etc.
  • security - Mandatory Access Check policy interfaces and related implementation.
  • bsd - BSD subsystems code
  • tools - A set of utilities for testing, debugging and profiling kernel.

How to build XNU

Building DEVELOPMENT kernel

The xnu make system can build a kernel based on the KERNEL_CONFIGS and ARCH_CONFIGS variables passed as arguments. The syntax is:

make SDKROOT=<sdkroot> ARCH_CONFIGS=<arch> KERNEL_CONFIGS=<variant>

Where:

  • <sdkroot>: path to the macOS SDK on disk (defaults to /).
  • <variant>: one of debug, development, release, or profile; configures compilation flags and asserts throughout the kernel code.
  • <arch>: any valid architecture to build for (e.g. X86_64).

To build a kernel for the same architecture as the running OS, just type:

$ make
$ make SDKROOT=macosx.internal

Additionally, there is support for configuring architectures through ARCH_CONFIGS and kernel configurations with KERNEL_CONFIGS.

$ make SDKROOT=macosx.internal ARCH_CONFIGS=X86_64 KERNEL_CONFIGS=DEVELOPMENT
$ make SDKROOT=macosx.internal ARCH_CONFIGS=X86_64 KERNEL_CONFIGS="RELEASE DEVELOPMENT DEBUG"

Note:

  • By default, architecture is set to the build machine architecture, and the default kernel config is set to build for DEVELOPMENT.

This will also create a bootable image, kernel.[config], and a kernel binary with symbols, kernel.[config].unstripped.

To install the kernel into a DSTROOT, use the install_kernels target:

$ make install_kernels DSTROOT=/tmp/xnu-dst

Hint: For a more satisfying kernel debugging experience, with access to all local variables and arguments, but without all the extra checks of the DEBUG kernel, add something like: CFLAGS_DEVELOPMENTARM64="-O0 -g -DKERNEL_STACK_MULTIPLIER=2" CXXFLAGS_DEVELOPMENTARM64="-O0 -g -DKERNEL_STACK_MULTIPLIER=2" to your make command. Replace DEVELOPMENT and ARM64 with the appropriate build and platform.

  • To build with RELEASE kernel configuration

    make KERNEL_CONFIGS=RELEASE SDKROOT=/path/to/SDK
    

Building FAT kernel binary

Define architectures in your environment or when running a make command.

$ make ARCH_CONFIGS="X86_64" exporthdrs all

Other makefile options

  • $ make MAKEJOBS=-j8 # this will use 8 processes during the build. The default is 2x the number of active CPUs.
  • $ make -j8 # the standard command-line option is also accepted
  • $ make -w # trace recursive make invocations. Useful in combination with VERBOSE=YES
  • $ make BUILD_LTO=0 # build without LLVM Link Time Optimization
  • $ make REMOTEBUILD=user@remotehost # perform build on remote host
  • $ make BUILD_JSON_COMPILATION_DATABASE=1 # Build Clang JSON Compilation Database

The XNU build system can optionally output color-formatted build output. To enable this, you can either set the XNU_LOGCOLORS environment variable to y, or you can pass LOGCOLORS=y to the make command.

Debug information formats

By default, a DWARF debug information repository is created during the install phase; this is a "bundle" named kernel.development.<variant>.dSYM. To select the older STABS debug information format (where debug information is embedded in the kernel.development.unstripped image), set the BUILD_STABS environment variable.

$ export BUILD_STABS=1
$ make

Building KernelCaches

To test the xnu kernel, you need to build a kernelcache that links the kexts and kernel together into a single bootable image. To build a kernelcache you can use the following mechanisms:

  • Using automatic kernelcache generation with kextd. The kextd daemon watches for changes in the /System/Library/Extensions directory, so you can set up a new kernel as:

    $ cp BUILD/obj/DEVELOPMENT/X86_64/kernel.development /System/Library/Kernels/
    $ touch /System/Library/Extensions
    $ ps -e | grep kextd
    
  • Manually invoking kextcache to build new kernelcache.

    $ kextcache -q -z -a x86_64 -l -n -c /var/tmp/kernelcache.test -K /var/tmp/kernel.test /System/Library/Extensions
    

Running KernelCache on Target machine

The development kernel and iBoot support configuring boot arguments so that you can safely boot into a test kernel and, if things go wrong, fall back to the previously used kernelcache. The steps for such a setup are:

  1. Create a kernelcache using the kextcache command as /kernelcache.test

  2. Copy the existing boot configuration to an alternate file

    $ cp /Library/Preferences/SystemConfiguration/com.apple.Boot.plist /next_boot.plist
    
  3. Update the kernelcache and boot-args for your setup

    $ plutil -insert "Kernel Cache" -string "kernelcache.test" /next_boot.plist
    $ plutil -replace "Kernel Flags" -string "debug=0x144 -v kernelsuffix=test " /next_boot.plist
    
  4. Copy the new config to /Library/Preferences/SystemConfiguration/

    $ cp /next_boot.plist /Library/Preferences/SystemConfiguration/boot.plist
    
  5. Bless the volume with new configs.

    $ sudo -n bless  --mount / --setBoot --nextonly --options "config=boot"
    

    The --nextonly flag specifies that the boot.plist configs be used for the next boot only, so if the kernel panics you can simply power-cycle the machine and recover to the original kernel.

Creating tags and cscope

Set up your build environment and from the top directory, run:

$ make tags     # this will build ctags and etags on a case-sensitive volume, only ctags on case-insensitive
$ make TAGS     # this will build etags
$ make cscope   # this will build cscope database

How to install a new header file from XNU

To install IOKit headers, see additional comments in iokit/IOKit/Makefile.

XNU installs header files at the following locations -

a. $(DSTROOT)/System/Library/Frameworks/Kernel.framework/Headers
b. $(DSTROOT)/System/Library/Frameworks/Kernel.framework/PrivateHeaders
c. $(DSTROOT)/usr/include/
d. $(DSTROOT)/System/DriverKit/usr/include/
e. $(DSTROOT)/System/Library/Frameworks/System.framework/PrivateHeaders

Kernel.framework is used by kernel extensions.
The System.framework and /usr/include are used by user level applications.
/System/DriverKit/usr/include is used by userspace drivers.
The header files in a framework's PrivateHeaders are only available for Apple internal development.

The directory containing the header file should have a Makefile that creates the list of files that should be installed at different locations. If you are adding the first header file in a directory, you will need to create a Makefile similar to xnu/bsd/sys/Makefile.

Add your header file to the correct file list depending on where you want to install it. The default locations where the header files are installed from each file list are -

a. `DATAFILES` : To make header file available in user level -
   `$(DSTROOT)/usr/include`

b. `DRIVERKIT_DATAFILES` : To make header file available to DriverKit userspace drivers -
   `$(DSTROOT)/System/DriverKit/usr/include`

c. `PRIVATE_DATAFILES` : To make header file available to Apple internal in
   user level -
   `$(DSTROOT)/System/Library/Frameworks/System.framework/PrivateHeaders`

d. `KERNELFILES` : To make header file available in kernel level -
   `$(DSTROOT)/System/Library/Frameworks/Kernel.framework/Headers`
   `$(DSTROOT)/System/Library/Frameworks/Kernel.framework/PrivateHeaders`

e. `PRIVATE_KERNELFILES` : To make header file available to Apple internal
   for kernel extensions -
   `$(DSTROOT)/System/Library/Frameworks/Kernel.framework/PrivateHeaders`

The Makefile combines the file lists mentioned above into different install lists which are used by the build system to install the header files. There are two types of install lists, machine-dependent and machine-independent, indicated by MD and MI in the build setting names. If your header is architecture-specific, use a machine-dependent install list (e.g. INSTALL_MD_LIST). If your header should be installed for all architectures, use a machine-independent install list (e.g. INSTALL_MI_LIST).

If the install list that you are interested in does not exist, create it by adding the appropriate file lists. The default install lists, their member file lists, and their default locations are described below -

a. `INSTALL_MI_LIST` : Installs header file to a location that is available to everyone in user level.
    Locations -
       $(DSTROOT)/usr/include
   Definition -
       INSTALL_MI_LIST = ${DATAFILES}

b. `INSTALL_DRIVERKIT_MI_LIST` : Installs header file to a location that is
    available to DriverKit userspace drivers.
    Locations -
       $(DSTROOT)/System/DriverKit/usr/include
   Definition -
       INSTALL_DRIVERKIT_MI_LIST = ${DRIVERKIT_DATAFILES}

c.  `INSTALL_MI_LCL_LIST` : Installs header file to a location that is available
   for Apple internal in user level.
   Locations -
       $(DSTROOT)/System/Library/Frameworks/System.framework/PrivateHeaders
   Definition -
       INSTALL_MI_LCL_LIST = ${PRIVATE_DATAFILES}

d. `INSTALL_KF_MI_LIST` : Installs header file to location that is available
   to everyone for kernel extensions.
   Locations -
        $(DSTROOT)/System/Library/Frameworks/Kernel.framework/Headers
   Definition -
        INSTALL_KF_MI_LIST = ${KERNELFILES}

e. `INSTALL_KF_MI_LCL_LIST` : Installs header file to location that is
   available for Apple internal for kernel extensions.
   Locations -
        $(DSTROOT)/System/Library/Frameworks/Kernel.framework/PrivateHeaders
   Definition -
        INSTALL_KF_MI_LCL_LIST = ${KERNELFILES} ${PRIVATE_KERNELFILES}

f. `EXPORT_MI_LIST` : Exports header file to all of xnu (bsd/, osfmk/, etc.)
   for compilation only. Does not install anything into the SDK.
   Definition -
        EXPORT_MI_LIST = ${KERNELFILES} ${PRIVATE_KERNELFILES}

g. `INSTALL_MODULEMAP_INCDIR_MI_LIST` : Installs module map file to a
   location that is available to everyone in user level, installing at the
   root of INCDIR.
   Locations -
       $(DSTROOT)/usr/include
   Definition -
       INSTALL_MODULEMAP_INCDIR_MI_LIST = ${MODULEMAP_INCDIR_FILES}

If you want to install the header file in a sub-directory of the paths described above, specify the directory name using the two variables INSTALL_MI_DIR and EXPORT_MI_DIR as follows -

INSTALL_MI_DIR = dirname
EXPORT_MI_DIR = dirname

A single header file can exist at different locations using the steps mentioned above. However, it might not be desirable to make all the code in the header file available at all of those locations. For example, you may want to export a function only at kernel level but not user level.

You can use the C preprocessor directives (#ifdef, #endif, #ifndef) to control which text is generated before a header file is installed. Code guarded by macros that evaluate to TRUE is kept; code under FALSE conditions is stripped from the installed header.

Some pre-defined macros and their descriptions are -

a. `PRIVATE` : If defined, enclosed definitions are considered System
Private Interfaces. These are visible within xnu and
exposed in user/kernel headers installed within the AppleInternal
"PrivateHeaders" sections of the System and Kernel frameworks.
b. `KERNEL_PRIVATE` : If defined, enclosed code is available to all of xnu
kernel and Apple internal kernel extensions and omitted from user
headers.
c. `BSD_KERNEL_PRIVATE` : If defined, enclosed code is visible exclusively
within the xnu/bsd module.
d. `MACH_KERNEL_PRIVATE`: If defined, enclosed code is visible exclusively
within the xnu/osfmk module.
e. `XNU_KERNEL_PRIVATE`: If defined, enclosed code is visible exclusively
within xnu.
f. `KERNEL` :  If defined, enclosed code is available within xnu and kernel
   extensions and is not visible in user level header files.  Only the
   header files installed in following paths will have the code -

        $(DSTROOT)/System/Library/Frameworks/Kernel.framework/Headers
        $(DSTROOT)/System/Library/Frameworks/Kernel.framework/PrivateHeaders
g. `DRIVERKIT`: If defined, enclosed code is visible exclusively in the
DriverKit SDK headers used by userspace drivers.

Conditional compilation

xnu offers the following mechanisms for conditionally compiling code:

a. *CPU Characteristics* If the code you are guarding has specific
characteristics that will vary only based on the CPU architecture being
targeted, use this option. Prefer checking for features of the
architecture (e.g. `__LP64__`, `__LITTLE_ENDIAN__`, etc.).
b. *New Features* If the code you are guarding, when taken together,
implements a feature, you should define a new feature in `config/MASTER`
and use the resulting `CONFIG` preprocessor token (e.g. for a feature
named `config_virtual_memory`, check for `#if CONFIG_VIRTUAL_MEMORY`).
This practice ensures that existing features may be brought to other
platforms by simply changing a feature switch.
c. *Existing Features* You can use existing features if your code is
strongly tied to them (e.g. use `SECURE_KERNEL` if your code implements
new functionality that is exclusively relevant to the trusted kernel and
updates the definition/understanding of what being a trusted kernel means).

It is recommended that you avoid compiling based on the target platform. xnu does not define the platform macros from TargetConditionals.h (TARGET_OS_OSX, TARGET_OS_IOS, etc.).

There is a deprecated TARGET_OS_EMBEDDED macro, but this should be avoided as it is in general too broad a definition for most functionality. Please refer to TargetConditionals.h for a full picture.

How to add a new syscall

Testing the kernel

The XNU kernel has multiple mechanisms for testing.

  • Assertions - The DEVELOPMENT and DEBUG kernel configs are compiled with assertions enabled. This allows developers to easily test invariants and conditions.

  • XNU Power On Self Tests (XNUPOST): The XNUPOST config allows building the kernel with a basic set of test functions that run before the first user-space process is launched. Since XNU is a hybrid of Mach and BSD, there are two locations where tests can be added.

    xnu/osfmk/tests/     # For testing mach based kernel structures and apis.
    bsd/tests/           # For testing BSD interfaces.
    

    Please follow the documentation at osfmk/tests/README.md

  • User level tests: The tools/tests/ directory holds all the tests that verify syscalls and other features of the xnu kernel. The make target xnu_tests can be used to build all the supported tests.

    $ make RC_ProjectName=xnu_tests SDKROOT=/path/to/SDK
    

    These tests are individual programs that can be run from Terminal and report test status by means of standard POSIX exit codes (0 -> success) and/or stdout. Please read the detailed documentation in tools/tests/unit_tests/README.md

Kernel data descriptors

XNU uses different data formats for passing data in its APIs. The most standard way is using syscall arguments, but for complex data it often relies on sending memory laid out as C structs. This packed data transport mechanism is fragile and leads to broken interfaces between user-space programs and kernel APIs. The libkdd directory holds a user-space library that can parse custom data provided by the same version of the kernel. The kernel chunked data format is described in detail at libkdd/README.md.

Debugging the kernel

The xnu kernel supports debugging with a remote kernel debugging protocol (kdp). Please refer to the documentation in Technical Note TN2063. By default the kernel is set up to reboot on a panic. To debug a live kernel, the kdp server is set up to listen for UDP connections over Ethernet. For machines without an Ethernet port, this behavior can be altered with kernel boot-args. Some common options are:

  • debug=0x144 - sets up debug variables to start the kdp debug server on panic
  • -v - print kernel logs on screen; by default XNU only shows a grey screen with boot art.
  • kdp_match_name=en1 - override the default port selection for kdp. Supported for Ethernet, Thunderbolt, and serial debugging.

To debug a panicked kernel, use the LLVM debugger (lldb) along with the unstripped, symbol-rich kernel binary.

sh$ lldb kernel.development.unstripped

Then you can connect to the panicked machine with the kdp_remote [ip addr] or gdb_remote [hostip:port] commands.

Each kernel is packaged with kernel-specific debug scripts as part of the build process. For security reasons these special commands and scripts do not get loaded automatically when lldb is connected to the machine. Add the following setting to your ~/.lldbinit if you wish to always load these macros:

settings set target.load-script-from-symbol-file true

The tools/lldbmacros directory contains the source for each of these commands. Please follow the README.md for a detailed explanation of the commands and their usage.

from  https://github.com/apple/darwin-xnu

contoso-creative-writer

 A creative writing multi-agent solution to help users write articles. 

 

Creative Writing Assistant: Working with Agents using Prompty (Python Implementation)

Open in GitHub Codespaces Open in Dev Containers


App preview

Agent workflow preview

Contoso Creative Writer is an app that helps you write well-researched, product-specific articles. Enter the required information and then click "Start Work". To watch the steps in the agent workflow, select the debug button in the bottom right corner of the screen. The article will begin to appear once the agents complete their tasks.

This sample demonstrates how to create and work with AI agents driven by Azure OpenAI. It includes a FastAPI app that takes a topic and instruction from a user and then calls a research agent that uses the Bing Grounding Tool in Azure AI Agent Service to research the topic, a product agent that uses Azure AI Search to do a semantic similarity search for related products from a vector store, a writer agent to combine the research and product information into a helpful article, and an editor agent to refine the article that's finally presented to the user.
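
Conceptually the flow is a linear hand-off between the four agents. The sketch below mirrors it with stub functions (hypothetical names; the real flow lives in src/api/orchestrator.py):

# Conceptual sketch of the four-agent hand-off; the bodies are stubs
# standing in for the real agents under src/api/agents/.
def research_agent(topic):             # Bing Grounding via Azure AI Agent Service
    return f"research notes on {topic}"

def product_agent(topic):              # semantic search over the Azure AI Search index
    return f"related products for {topic}"

def writer_agent(research, products, instructions):
    return f"draft from [{research}] and [{products}] following: {instructions}"

def editor_agent(draft):               # refines the draft before it reaches the user
    return draft + " (edited)"

def write_article(topic, instructions):
    return editor_agent(writer_agent(research_agent(topic), product_agent(topic), instructions))

print(write_article("camping in Alaska", "detail the gear needed"))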

Features

This project template provides the following features:

Architecture Diagram

Azure account requirements

IMPORTANT: In order to deploy and run this example, you'll need:

  • Azure account. If you're new to Azure, get an Azure account for free and you'll get some free Azure credits to get started. See guide to deploying with the free trial.
  • Azure subscription with access enabled for the Azure OpenAI Service. If your access request to Azure OpenAI Service doesn't match the acceptance criteria, you can use OpenAI public API instead.
    • Ability to deploy gpt-4o and gpt-4o-mini. Currently you will need at least 80TPM for gpt-4o to use the Bing Grounding tool.
    • We recommend using eastus2, as this region has access to all models and services required.
  • Azure subscription with access enabled for Bing Grounding
  • Azure subscription with access enabled for Azure AI Search

Getting Started

You have a few options for setting up this project. The easiest way to get started is GitHub Codespaces, since it will set up all the tools for you, but you can also set it up locally.

GitHub Codespaces

  1. You can run this template virtually by using GitHub Codespaces. The button will open a web-based VS Code instance in your browser:

    Open in GitHub Codespaces

  2. Open a terminal window.

  3. Sign in to your Azure account. You'll need to login to both the Azure Developer CLI and Azure CLI:

    i. First with Azure Developer CLI

    azd auth login

    ii. Then sign in with Azure CLI

    az login --use-device-code
  4. Provision the resources and deploy the code:

    azd up

    You will be prompted to select some details about your deployed resources, including location. As a reminder, we recommend East US 2 as the region for this project. Once the deployment is complete you should be able to scroll up in your terminal and see the URL that the app has been deployed to. It should look similar to this: "Ingress Updated. Access your app at https://env-name.codespacesname.eastus2.azurecontainerapps.io/". Navigate to the link to try out the app straight away!

  5. Once the above steps are completed you can test the sample.

VS Code Dev Containers

A related option is VS Code Dev Containers, which will open the project in your local VS Code using the Dev Containers extension:

  1. Start Docker Desktop (install it if not already installed)

  2. Open the project:

    Open in Dev Containers

  3. In the VS Code window that opens, once the project files show up (this may take several minutes), open a terminal window.

  4. Install required packages:

     # create and activate a virtual env
     python -m venv .venv
     .\.venv\Scripts\activate    # (use source .venv/bin/activate on macOS/Linux)
     cd src/api
     pip install -r requirements.txt
  5. Once you've completed these steps jump to deployment.

Note

If you use Dev Containers on Windows and encounter this error when deploying:

error executing step command 'provision' : failed running post hooks: 'postprovision' hook failed with exit code: '127', Path 'infra/hooks/postprovision.sh'. : exit code 127,

change the end of line sequence of file infra/hooks/postprovision.sh from CRLF to LF using Visual Studio Code. This option is available in the status bar at the bottom. This should resolve the issue.

Local environment

Prerequisites

Note for Windows users: If you are not using a container to run this sample, our hooks are currently all shell scripts. To provision this sample correctly while we work on updates we recommend using git bash.

Initializing the project

  1. Create a new folder and switch to it in the terminal, then run this command to download the project code:

    azd init -t contoso-creative-writer

    Note that this command will initialize a git repository, so you do not need to clone this repository.

  2. Install required packages:

    # create and activate a virtual env
    python -m venv .venv
    .\.venv\Scripts\activate    # (use source .venv/bin/activate on macOS/Linux)
    cd src/api
    pip install -r requirements.txt

Deployment

Once you've opened the project in Codespaces, Dev Containers, or locally, you can deploy it to Azure.

  1. Sign in to your Azure account. You'll need to login to both the Azure Developer CLI and Azure CLI:

    i. First with Azure Developer CLI

    azd auth login

    ii. Then sign in with Azure CLI

    az login --use-device-code

    If you have any issues with that command, you may also want to try azd auth login --use-device-code.

    This will create a folder under .azure/ in your project to store the configuration for this deployment. You may have multiple azd environments if desired.

  2. Provision the resources and deploy the code:

    azd up

    This project uses gpt-4o and gpt-4o-mini, which may not be available in all Azure regions. Check for up-to-date region availability and select a region during deployment accordingly. We recommend using East US 2 for this project.

    After running azd up, you may be asked the following question during GitHub setup:

    Do you want to configure a GitHub action to automatically deploy this repo to Azure when you push code changes?
    (Y/n) Y

    You should respond with N, as this is not a necessary step, and takes some time to set up.

Testing the sample

This sample repository contains an agents folder with subfolders for each agent. Each agent folder contains a prompty file where the agent's prompt is defined and a Python file with the code used to run it. Exploring these files will help you understand what each agent is doing. The agents folder also contains an orchestrator.py file that can be used to run the entire flow and create an article. When you ran azd up, a catalogue of products was uploaded to the Azure AI Search vector store and an index named contoso-products was created.

To test the sample:

  1. Run the example web app locally using a FastAPI server.

    First navigate to the src/api folder

    cd ./src/api

    Run the FastAPI webserver

    fastapi dev main.py

    Important Note: If you are running in Codespaces, you will need to change the visibility of the API's 8000 and 5173 ports to public in your VS Code terminal's PORTS tab. The ports tab should look like this:

    Screenshot showing setting port-visibility

    If you open the server link in a browser, you will see a "URL not found" error; this is because we haven't created a home URL route in FastAPI. We have instead created a /get_article route, which is used to pass context and instructions directly to the get_article.py file that runs the agent workflow.

    (Optional) We have created a web interface which we will run next, but you can test the API is working as expected by running this in the browser:

    http://127.0.0.1:8000/get_article?context=Write an article about camping in alaska&instructions=find specifics about what type of gear they would need and explain in detail
    
  2. Once the FastAPI server is running you can now run the web app. To do this open a new terminal window and navigate to the web folder using this command:

    cd ./src/web

    First install node packages:

    npm install

    Then run the web app with a local dev web server:

    npm run dev

    This will launch the app, where you can use example context and instructions to get started. On the 'Creative Team' page you can examine the output of each agent by clicking on it. The app should look like this:

    Change the instructions and context to create an article of your choice.

  3. For debugging purposes, you may want to test the orchestrator logic directly in Python.

    To run the sample using just the orchestrator logic use the following command:

    cd ./src/api
    python -m orchestrator
    

Tracing

To activate the Prompty tracing server:

export LOCAL_TRACING=true

Then start the orchestrator:

cd ./src/api
python -m orchestrator

Once the article has been generated, a .runs folder should appear in ./src/api. Open this folder and click the .tracy file inside it. This shows you all the Python functions that were called in order to generate the article. Explore each section and see what helpful information you can find.

Evaluating results

Contoso Creative Writer uses evaluators to assess application response quality. The 4 metrics the evaluators in this project assess are Coherence, Fluency, Relevance, and Groundedness. A custom evaluate.py script has been written to run all evaluations for you.

  1. To run the script, run the following commands:

    cd ./src/api
    python -m evaluate.evaluate

    • Check: you see scores for Coherence, Fluency, Relevance, and Groundedness.
    • Check: the scores are between 1 and 5.

  2. To understand what is being evaluated, open the src/api/evaluate/eval_inputs.jsonl file.
    • Observe that 3 examples of research, product, and assignment context are stored in this file.
    • This data is sent to the orchestrator so that the evaluations for each example incorporate all of the context, research, products, and final article when grading the response.

Setting up CI/CD with GitHub actions

This template is set up to run CI/CD when you push changes to your repo. When CI/CD is configured, evaluations will run in GitHub Actions and your app will then be deployed automatically on each push to main.

To set up CI/CD with GitHub actions on your repository, run the following command:

azd pipeline config

Guidance

Region Availability

This template uses gpt-4o and gpt-4o-mini which may not be available in all Azure regions. Check for up-to-date region availability and select a region during deployment accordingly

  • We recommend using East US 2

Costs

You can estimate the cost of this project's architecture with Azure's pricing calculator

Security

Note

When implementing this template please specify whether the template uses Managed Identity or Key Vault

This template has either Managed Identity or Key Vault built in to eliminate the need for developers to manage these credentials. Applications can use managed identities to obtain Microsoft Entra tokens without having to manage any credentials. Additionally, we have added a GitHub Action tool that scans the infrastructure-as-code files and generates a report containing any detected issues. To ensure best practices in your repo, we recommend that anyone creating solutions based on our templates ensure that the GitHub secret scanning setting is enabled in their repos.

Resources

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct.

Resources:

For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Responsible AI Guidelines

This project follows the responsible AI guidelines and best practices below; please review them before using this project:

from  https://github.com/Azure-Samples/contoso-creative-writer