
Friday, 11 October 2019

inlets

Expose your local endpoints to the Internet


Intro

inlets combines a reverse proxy and websocket tunnels to expose your internal and development endpoints to the public Internet via an exit-node. An exit-node may be a 5-10 USD VPS or any other computer with a public IPv4 address.
Why do we need this project? Similar tools such as ngrok or Argo Tunnel from Cloudflare are closed-source, have built-in limits, can become expensive, and have limited support for arm/arm64. Ngrok is also often banned by corporate firewall policies, making it unusable. Other open-source tunnel tools are designed to set up only a single static tunnel. inlets aims to dynamically discover your local services and bind them to DNS entries with automated TLS certificates on a public IP address, over a websocket tunnel.
When combined with SSL, inlets can be used with any corporate HTTP proxy which supports CONNECT.

Conceptual diagram for inlets

License & terms

Important
Developers wishing to use inlets within a corporate network are advised to seek approval from their administrators or management before using the tool. By downloading, using, or distributing inlets, you agree to the LICENSE terms & conditions. No warranty or liability is provided.

Who is behind this project?

inlets is brought to you by Alex Ellis. Alex is a CNCF Ambassador and the founder of OpenFaaS.
OpenFaaS® makes it easy for developers to deploy event-driven functions and microservices to Kubernetes without repetitive, boiler-plate coding. Package your code or an existing binary in a Docker image to get a highly scalable endpoint with auto-scaling and metrics. The project has around 19k GitHub stars, over 240 contributors and a growing number of end-users in production.
Become an Insider via GitHub Sponsors to receive regular Insider Updates covering inlets and all of Alex's other OSS work, blogs and videos.

Goals

Initial goals:

  • automatically create endpoints on exit-node based upon client definitions
    • multiplex sites on same port and websocket through the use of DNS / host entries
  • link encryption using SSL over websockets (wss://)
  • automatic reconnect
  • authentication using service account or basic auth
  • automatic TLS provisioning for endpoints using cert-magic
    • configure staging or production LetsEncrypt issuer using HTTP01 challenge
  • native multi-arch with ARMHF/ARM64 support
  • Dockerfile and Kubernetes YAML files

Stretch goals:

  • discover and implement Service type LoadBalancer for Kubernetes - inlets-operator
  • tunnelling websocket traffic in addition to HTTP(s)
  • automatic configuration of DNS / A records
  • configuration to run "exit-node" as serverless container with Azure ACI / AWS Fargate
  • configure staging or production LetsEncrypt issuer using DNS01 challenge
  • get a logo for the project

Non-goals:

  • tunnelling plain TCP traffic over the websocket
    This use-case is covered by inlets-pro, ask me about early access to inlets-pro.

Status

Unlike HTTP/1.1, which follows a synchronous request/response model, websockets use an asynchronous pub/sub model for sending and receiving messages. This presents a challenge for tunneling a synchronous protocol over an asynchronous bus.
inlets 2.0 introduces performance enhancements and leverages parts of the Kubernetes and Rancher API. It uses the same tunnelling packages that enable node-to-node communication in Rancher's k3s project. It is suitable for development and may be useful in production. Before deploying inlets into production, it is advised that you do adequate testing.
Feel free to open issues if you have comments, suggestions or contributions.
  • The tunnel link is secured with a shared secret passed via the --token flag
  • The default configuration uses websockets without SSL (ws://), but encryption can be enabled by using SSL (wss://)
  • A timeout for requests can be configured via args on the server
  • The client advertises the upstream URLs which it can serve
  • The tunnel transport is wrapped by default, which strips CORS headers from responses, but you can disable this with the --disable-transport-wrapping flag on the server

Video demo

Using inlets I was able to set up a public endpoint (with a custom domain name) for my JavaScript & Webpack Create React App.
Watch: https://youtu.be/jrAqqe8N3q4

What are people saying about inlets?

You can share about inlets using #inletsdev, #inlets, and https://inlets.dev.
inlets has trended on the front page of Hacker News twice.
Note: add a PR to send your story or use-case, I'd love to hear from you.

Get started

You can install the CLI with a curl utility script, brew or by downloading the binary from the releases page. Once installed you'll get the inlets command.

Install the CLI

Utility script with curl:
# Install to local directory
curl -sLS https://get.inlets.dev | sh

# Install to /usr/local/bin/
curl -sLS https://get.inlets.dev | sudo sh
Via brew:
brew install inlets
Note: the brew distribution is maintained by the brew team, so it may lag a little behind the GitHub release.
Binaries are made available on the releases page for Linux (x86_64, armhf & arm64), Windows (experimental), and Darwin (macOS). SHA checksums are also published so that you can verify your download.
Windows users are encouraged to use Git Bash to install inlets.
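Verifying a download against its checksum looks like the following; the file here is a stand-in for the real binary, and the exact checksum filenames on the releases page may differ:

```shell
# Create a stand-in "binary" and a checksum file for it, then verify;
# for a real release you download both files from the releases page instead
cd "$(mktemp -d)"
echo "pretend-binary" > inlets
sha256sum inlets > inlets.sha256
sha256sum -c inlets.sha256   # prints "inlets: OK"
```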

Test it out

You can run inlets between any two computers with connectivity; these could be containers, VMs, bare metal or even "loop-back" on your own laptop.
See how to provision an "exit-node" with a public IPv4 address using a VPS.
  • On the exit-node (or server)
Start the tunnel server on a machine with a publicly-accessible IPv4 IP address such as a VPS.
Example with a token for client authentication:
export token=$(head -c 16 /dev/urandom | shasum | cut -d" " -f1)
inlets server --port=8090 --token="$token"
Note: You can pass the --token argument followed by a token value to both the server and client to prevent unauthorized connections to the tunnel.
You can also run the server unprotected, but this is not recommended:
inlets server --port=8090
Note down your public IPv4 IP address.
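The token generated above is the SHA-1 digest of 16 random bytes, so it is always a 40-character hex string; a quick check that you copied the whole thing (sha1sum prints the same digest as shasum):

```shell
# Generate a token the same way as above and confirm its length
token=$(head -c 16 /dev/urandom | sha1sum | cut -d' ' -f1)
echo "${#token}"   # prints 40
```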
  • Head over to your machine where you are running a sample service, or something you want to expose.
You can use my hash-browns service, for instance, which generates hashes.
Install hash-browns, or run your own HTTP server:
export GO111MODULE=off
go get -u github.com/alexellis/hash-browns
cd $GOPATH/src/github.com/alexellis/hash-browns

port=3000 go run server.go
If you don't have Go installed, then you could run Python's built-in HTTP server:
mkdir -p /tmp/inlets-test/
cd /tmp/inlets-test/
touch hello-world
python -m SimpleHTTPServer 3000
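SimpleHTTPServer is a Python 2 module; if you only have Python 3, the equivalent is http.server. A self-contained check that the file shows up in the directory listing:

```shell
mkdir -p /tmp/inlets-test/
cd /tmp/inlets-test/
touch hello-world
# Python 3 equivalent of "python -m SimpleHTTPServer 3000"
python3 -m http.server 3000 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1
# The directory listing should contain the file we just created
curl -s http://127.0.0.1:3000/ | grep -o -m1 hello-world   # prints hello-world
kill "$SERVER_PID"
```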
  • On the same machine, start the inlets client
Start the tunnel client:
export REMOTE="127.0.0.1:8090"    # for testing inlets on your laptop, replace with the public IPv4
export TOKEN="CLIENT-TOKEN-HERE"  # the client token is found on your VPS or on start-up of "inlets server"
inlets client \
 --remote=$REMOTE \
 --upstream=http://127.0.0.1:3000 \
 --token $TOKEN
  • Replace the --remote with the address where your exit-node is running inlets server.
  • Replace the --token with the value from your server
We now have three processes:
  • example service running (hash-browns) or Python's webserver
  • an exit-node running the tunnel server (inlets server)
  • a client running the tunnel client (inlets client)
Now send a request to the inlets server using its domain name or IP address.
Assuming gateway.mydomain.tk points to 127.0.0.1 in /etc/hosts or your DNS server:
curl -d "hash this" http://127.0.0.1:8090/hash -H "Host: gateway.mydomain.tk"
# or
curl -d "hash this" http://127.0.0.1:8090/hash
# or
curl -d "hash this" http://gateway.mydomain.tk/hash
You will see traffic pass between the exit-node and your development machine, and the hash message will appear in the logs as below:
~/go/src/github.com/alexellis/hash-browns$ port=3000 go run server.go
2018/12/23 20:15:00 Listening on port: 3000
"hash this"
Now check the metrics endpoint which is built-into the hash-browns example service:
curl $REMOTE/metrics | grep hash
You can also use multiple domain names and tie them back to different internal services.
Here we start the Python server on two different ports, serving content from two different locations and then map it to two different Host headers, or domain names:
mkdir -p /tmp/store1
cd /tmp/store1/
touch hello-store-1
python -m SimpleHTTPServer 8001 &


mkdir -p /tmp/store2
cd /tmp/store2/
touch hello-store-2
python -m SimpleHTTPServer 8002 &
export REMOTE="127.0.0.1:8090"    # for testing inlets on your laptop, replace with the public IPv4
export TOKEN="CLIENT-TOKEN-HERE"  # the client token is found on your VPS or on start-up of "inlets server"
inlets client \
 --remote=$REMOTE \
 --token $TOKEN \
 --upstream="store1.example.com=http://127.0.0.1:8001,store2.example.com=http://127.0.0.1:8002"
You can now create two DNS entries or /etc/hosts file entries for store1.example.com and store2.example.com, then connect through your browser.
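Before pointing DNS at the tunnel you can sanity-check that both upstreams respond locally; this repeats the setup above in self-contained form, with python3's http.server (and its -d flag, Python 3.7+) standing in for SimpleHTTPServer:

```shell
mkdir -p /tmp/store1 /tmp/store2
touch /tmp/store1/hello-store-1 /tmp/store2/hello-store-2
# Serve each directory on its own port
python3 -m http.server -d /tmp/store1 8001 >/dev/null 2>&1 &
P1=$!
python3 -m http.server -d /tmp/store2 8002 >/dev/null 2>&1 &
P2=$!
sleep 1
# Each listing should contain its marker file
curl -s http://127.0.0.1:8001/ | grep -o -m1 hello-store-1
curl -s http://127.0.0.1:8002/ | grep -o -m1 hello-store-2
kill "$P1" "$P2"
```

Once both respond, the --upstream mappings above can point at them.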

Development

For development you will need Golang 1.10 or 1.11 on both the exit-node or server and the client.
You can get the code like this:
go get -u github.com/alexellis/inlets
cd $GOPATH/src/github.com/alexellis/inlets
Contributions are welcome. All commits must be signed-off with git commit -s to accept the Developer Certificate of Origin.

Take things further

You can expose an OpenFaaS or OpenFaaS Cloud deployment with inlets - just change --upstream=http://127.0.0.1:3000 to --upstream=http://127.0.0.1:8080 or --upstream=http://127.0.0.1:31112. You can even point at an IP address inside or outside your network for instance: --upstream=http://192.168.0.101:8080.
You can build a basic supervisor script for inlets so that it reconnects within 5 seconds of a crash:
In this example the host/client acts as a relay for OpenFaaS running on port 8080 at 192.168.0.28 on the internal network.
Host/Client:
while true; do sleep 5 && inlets client --upstream=http://192.168.0.28:8080 --remote=exit.my.club; done
Exit-node:
while true; do sleep 5 && inlets server --port=8090; done
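A slightly more general version of that loop, not part of inlets itself but a sketch you can adapt, retries any command with a configurable delay and can give up after a fixed number of failures:

```shell
# retry CMD [ARGS...]: run CMD until it succeeds, sleeping RETRY_DELAY
# seconds between attempts and giving up after RETRY_MAX failed attempts
# (defaults: 5 second delay, no attempt limit)
retry() {
    attempts=0
    until "$@"; do
        attempts=$((attempts + 1))
        if [ -n "${RETRY_MAX:-}" ] && [ "$attempts" -ge "$RETRY_MAX" ]; then
            echo "giving up after $attempts attempts" >&2
            return 1
        fi
        sleep "${RETRY_DELAY:-5}"
    done
}

# e.g. retry inlets client --upstream=http://192.168.0.28:8080 --remote=exit.my.club
RETRY_DELAY=0 RETRY_MAX=3 retry false || echo "gave up"   # prints "gave up"
```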

Bind a different port for the control-plane

You can bind two separate TCP ports for the user-facing port and the tunnel.
  • --port - the port for users to connect to and for serving data, i.e. the Data Plane
  • --control-port - the port for the websocket to connect to i.e. the Control Plane

Docker & Kubernetes application development

Docker images are published for x86_64 and armhf:
  • alexellis2/inlets:2.3.2
  • alexellis2/inlets:2.3.2-armhf
Note: For Raspberry Pi, you need to use the image ending in -armhf.

Run as a deployment on Kubernetes

You can run the client inside Kubernetes to expose your local services to the Internet, or another network.
Here's an example showing how to get ingress into your cluster for your OpenFaaS gateway and for Prometheus:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inlets
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: inlets
  template:
    metadata:
      labels:
        app.kubernetes.io/name: inlets
    spec:
      containers:
      - name: inlets
        image: alexellis2/inlets:2.3.2
        imagePullPolicy: Always
        command: ["inlets"]
        args:
        - "client"
        - "--upstream=http://gateway.openfaas:8080,http://prometheus.openfaas:9090"
        - "--remote=your-public-ip"
Replace the line: - "--remote=your-public-ip" with the public IP belonging to your VPS.
Alternatively, see the unofficial helm chart from the community: inlets-helm.
Note: For Raspberry Pi, you need to use the image ending in -armhf.

Use authentication from a Kubernetes secret

In production, you should always use a secret to protect your exit-node. You will need a way of passing that to your server and inlets allows you to read a Kubernetes secret.
  • Create a random secret
$ kubectl create secret generic inlets-token --from-literal token=$(head -c 16 /dev/urandom | shasum | cut -d" " -f1)
secret/inlets-token created
  • Or create a secret with the value from your remote server
$ export TOKEN=""
$ kubectl create secret generic inlets-token --from-literal token=${TOKEN}
secret/inlets-token created
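Equivalently, the secret can be declared as a manifest and applied with kubectl apply -f; a sketch, where the token value is a placeholder:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: inlets-token
type: Opaque
stringData:
  token: PASTE-YOUR-TOKEN-HERE
```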
  • Bind the secret named inlets-token to the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inlets
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: inlets
  template:
    metadata:
      labels:
        app.kubernetes.io/name: inlets
    spec:
      containers:
      - name: inlets
        image: alexellis2/inlets:2.3.2
        imagePullPolicy: Always
        command: ["inlets"]
        args:
        - "client"
        - "--remote=ws://REMOTE-IP"
        - "--upstream=http://gateway.openfaas:8080"
        - "--token-from=/var/inlets/token"
        volumeMounts:
          - name: inlets-token-volume
            mountPath: /var/inlets/
      volumes:
        - name: inlets-token-volume
          secret:
            secretName: inlets-token
Optional tear-down:
$ kubectl delete deploy/inlets
$ kubectl delete secret/inlets-token

Use your Kubernetes cluster as an exit-node

You can use a Kubernetes cluster which has public IP addresses, an IngressController, or a LoadBalancer to run one or more exit-nodes.
  • Create a random secret
$ kubectl create secret generic inlets-token --from-literal token=$(head -c 16 /dev/urandom | shasum | cut -d" " -f1)
secret/inlets-token created
  • Or create a secret with the value from your remote server
$ export TOKEN=""
$ kubectl create secret generic inlets-token --from-literal token=${TOKEN}
secret/inlets-token created
  • Create a Service
apiVersion: v1
kind: Service
metadata:
  name: inlets
  labels:
    app: inlets
spec:
  type: ClusterIP
  ports:
    - port: 8000
      protocol: TCP
      targetPort: 8000
  selector:
    app: inlets
  • Create a Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inlets
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: inlets
  template:
    metadata:
      labels:
        app.kubernetes.io/name: inlets
    spec:
      containers:
      - name: inlets
        image: alexellis2/inlets:2.3.2
        imagePullPolicy: Always
        command: ["inlets"]
        args:
        - "server"
        - "--token-from=/var/inlets/token"
        volumeMounts:
          - name: inlets-token-volume
            mountPath: /var/inlets/
      volumes:
        - name: inlets-token-volume
          secret:
            secretName: inlets-token
You can now create an Ingress record, or LoadBalancer to connect to your server. Note that clients connecting to this server will have to specify port 8000 for their remote, as the default is 80.
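As a sketch, an Ingress record for the Service above might look like the following; the hostname and the presence of an IngressController (e.g. ingress-nginx) are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: inlets
spec:
  rules:
  - host: exit.example.com   # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: inlets     # the Service defined above
            port:
              number: 8000
```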

Try inlets with KinD (Kubernetes in Docker)

Try this guide to expose services running in a KinD cluster:
Micro-tutorial inlets with KinD

Run on a VPS

Provisioning on a VPS will see inlets running as a systemd service. All the usual service commands should be used with inlets as the service name.
inlets uses a token to prevent unauthorized access to the server component. A known token can be configured by amending userdata.sh prior to provisioning:
# Enables randomly generated authentication token by default.
# Change the value here if you desire a specific token value.
export INLETSTOKEN=$(head -c 16 /dev/urandom | shasum | cut -d" " -f1)
If the token value is randomly generated then you will need to access the VPS in order to obtain the token value.
cat /etc/default/inlets

How do I enable TLS / HTTPS?

  • Create a DNS A record for your exit-node IP and the DNS entry exit.domain.com (replace as necessary).
  • Download Caddy from the Releases page.
  • Enter this text into a Caddyfile replacing exit.domain.com with your subdomain.
exit.domain.com

proxy / 127.0.0.1:8000 {
  transparent
}

proxy /tunnel 127.0.0.1:8000 {
  transparent
  websocket
}
  • Run inlets server --port 8000
  • Run caddy
Caddy will now ask you for your email address and after that will obtain a TLS certificate for you.
  • On the client run the following, adding any other parameters you need for --upstream
inlets client --remote wss://exit.domain.com
Note: wss:// uses port 443 for TLS by default.
You now have a secure TLS link between your client(s) and the server on the exit-node, over which your site can serve traffic.
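The Caddyfile above uses Caddy v1 syntax. If you run Caddy v2 instead, an equivalent configuration is shorter, since v2's reverse_proxy passes websocket upgrades through by default (a sketch; replace the domain with your own):

```
exit.domain.com {
    reverse_proxy 127.0.0.1:8000
}
```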

Where can I get a cheap / free domain-name?

You can get a free domain-name with a .tk, .ml or .ga TLD from https://www.freenom.com - make sure the domain has at least 4 letters to get it for free. You can also get various other domains for as little as 1-2 USD from https://www.namecheap.com
Namecheap provides wildcard TLS out of the box, but freenom only provides root/naked domain and a list of sub-domains. Domains from both providers can be moved to alternative nameservers for use with AWS Route 53 or Google Cloud DNS - this then enables wildcard DNS and the ability to get a wildcard TLS certificate from LetsEncrypt.
My recommendation: pay to use Namecheap.

Where can I host an inlets exit-node?

You can use inlets to provide incoming connections to any network, including containers, VMs and AWS Firecracker microVMs.
Examples:
  • Green to green - from one internal LAN to another
  • Green to red - from an internal network to the Internet (i.e. Raspberry Pi cluster)
  • Red to green - to make a service on a public network accessible as if it were a local service.
The following VPS providers have credit, or provisioning scripts to get an exit-node in a few moments.
Installation scripts have been provided which use systemd as a process supervisor. This means that if inlets crashes, it will be restarted automatically and logs are available.
  • After installation, find your token with sudo cat /etc/default/inlets
  • Check logs with sudo systemctl status inlets
  • Restart with sudo systemctl restart inlets
  • Check config with sudo systemctl cat inlets
DigitalOcean
If you're a DigitalOcean user and use doctl then you can provision a host with ./hack/provision-digitalocean.sh. Please ensure you have configured droplet.create.ssh-keys within your ~/.config/doctl/config.yaml.
DigitalOcean will then email you the IP and root password for your new host. You can use it to log in and get your auth token, so that you can connect your client after that.
Datacenters for exit-nodes are available world-wide
Civo
Civo is a UK developer cloud and offers 50 USD free credit.
Installation is currently manual and the datacenter is located in London.
  • Create a VM of any size and then download and run inlets as a server
  • Copy over ./hack/userdata.sh and run it on the server as root
Scaleway
Scaleway offers probably the cheapest option at 1.99 EUR/month, using the "1-XS" instance from the "Start" tier.
If you have the Scaleway CLI installed you can provision a host with ./hack/provision-scaleway.sh.
Datacenters include: Paris and Amsterdam.

Running over an SSH tunnel

You can tunnel over SSH if you are not using a reverse proxy that enables SSL. This encrypts the traffic over the tunnel.
On your client, create a tunnel to the exit-node:
ssh -L 8000:127.0.0.1:80 exit-node-ip
Now for the --remote address use --remote ws://127.0.0.1:8000

from https://github.com/alexellis/inlets 
-----------------------------------------------------------------------

inlets: NAT traversal over a WebSocket tunnel

I have been using inlets for HTTP NAT traversal for quite a while, and have followed the project since it had only a few hundred stars. Compared with frp, another popular NAT traversal tool, the inlets tunnel protocol is built on WebSocket, which means it gets the full security of TLS and can pass through all kinds of reverse proxies, and even a Kubernetes Ingress. The project was also designed with Kubernetes integration in mind, the author is focused on the cloud-native space, and Docker images are published for multiple architectures, so it fits my needs very well.

Unfortunately inlets does not seem to be as well known in China as frp, and I think the missing Chinese README is one reason. So I recently translated the project's documentation. The PR has not been merged yet; my guess is that the author cannot read Chinese and is concerned about translation quality. If you are interested, please raise suggestions and discussion on the PR, or leave a reaction to help it get merged sooner. Thank you.

PR: https://github.com/inlets/inlets/pull/142

The translated documentation follows.

Related projects

inlets is listed in the Cloud Native Landscape as a Service Proxy.

inlets - the open-source L7 HTTP tunnel and reverse proxy
inlets-pro - L4 TCP load-balancer
inlets-operator - deep integration of inlets with Kubernetes, implementing Service type LoadBalancer
inletsctl - a CLI for provisioning exit-nodes, for use with inlets and inlets-pro

inlets has trended on the front page of Hacker News twice:
https://news.ycombinator.com/item?id=19189455
https://news.ycombinator.com/item?id=20410552

Tutorials (in English):
https://blog.alexellis.io/ingress-for-your-local-kubernetes-cluster/
https://blog.alexellis.io/webhooks-are-great-when-you-can-get-them/
https://gist.github.com/alexellis/c29dd9f1e1326618f723970185195963
https://sysadmins.co.za/the-awesomeness-of-inlets/
https://medium.com/k8spin/what-does-fit-in-a-low-resources-namespace-3rd-part-inlets-6cc278835e57
https://blog.baeke.info/2019/07/17/exposing-a-local-endpoint-with-inlets/
https://twitter.com/BanzaiCloud/status/1164168218954670080
https://www.gitpod.io/blog/local-services-in-gitpod/


Take things further

Documentation and featured tutorials:
https://blog.alexellis.io/https-inlets-local-endpoints/
https://learnku.com/articles/docs/kubernetes.md
https://learnku.com/articles/docs/vps.md
https://blog.alexellis.io/ingress-for-your-local-kubernetes-cluster/

One exit-node, multiple services:
You can expose an OpenFaaS or OpenFaaS Cloud deployment with inlets - just change --upstream=http://127.0.0.1:3000 to --upstream=http://127.0.0.1:8080 or --upstream=http://127.0.0.1:31112. You can even point at an IP address inside or outside your network, for instance --upstream=http://192.168.0.101:8080.

Bind a separate port for the control-plane:
You can bind two separate TCP ports for the user-facing port and the tunnel.

--port - the port for users to connect to and for serving data, i.e. the Data Plane
--control-port - the port for the underlying websocket tunnel, i.e. the Control Plane

Development

You will need Golang 1.10 or 1.11 on both the exit-node (server) and the client.
Get the code like this:
go get -u github.com/inlets/inlets
cd $GOPATH/src/github.com/inlets/inlets
Alternatively, you can use Gitpod (https://gitpod.io/) to set up a development environment in your browser with one click:
https://gitpod.io/#https://github.com/inlets/inlets

Appendix

Other Kubernetes port-forwarding tools:
kubectl port-forward - built into the Kubernetes CLI, forwards a single port to the local computer.
kubefwd (https://github.com/txn2/kubefwd) - Kubernetes utility to port-forward multiple services to your local computer.
kurun (https://github.com/banzaicloud/kurun) - run main.go in Kubernetes with one command, and port-forward your app into Kubernetes.

Source: https://learnku.com/articles/39464
-------------------------------------

Yet another NAT traversal tool: inlets

Preface

With IPv4 addresses increasingly scarce, many ISPs have long since stopped giving subscribers public IPs and instead place them behind carrier-grade NAT, with many users sharing a single public address. The ISP saves address space and most users never notice the difference, but for anyone who needs to reach resources at home or at work from outside, it is a disaster. Sometimes you can get a public IP by phoning your ISP; that may work with China Telecom or Unicom, but not with the huge carrier NAT of China Mobile. To work around this, NAT traversal tools have sprung up like mushrooms: Ngrok, frp, Serveo, and the commercially very successful Oray (花生壳). Mature as those are, they are not the subject of this post. Via GitHub's recommendations I discovered an interesting newcomer: inlets.

Why recommend it

Sometimes first impressions matter, and inlets is the tool that caught my eye. Although frp is still my daily driver, I genuinely think inlets is worth a try.

inlets combines a reverse proxy and websocket tunnels to expose internal HTTP services to the public Internet via an exit-node. The exit-node may be a VPS or any other computer with a public IPv4 address. When combined with SSL, inlets can be used with any internal HTTP proxy that supports CONNECT.

What it can do

The developer, Alex Ellis, updates the project frequently. Since the jump from 1.x to 2.x, the author states outright that it can handle production workloads, although he recommends testing before deployment.

  • Core features:

1. Create endpoints on the remote server according to the client's configuration
2. Multiplex several sites on one port, keyed by domain name
3. Secure, encrypted communication using SSL over websockets
4. Automatic reconnection
5. Authentication support
6. Multi-platform support
7. Docker and Kubernetes integration
8. Native multi-arch support, including ARMHF and ARM64
9. Tunnelling websocket traffic in addition to HTTP(s)

  • Planned features:

1. Automatic configuration of DNS / A records
2. Running the "exit-node" as a serverless container on Azure ACI or AWS Fargate
3. Staging or production LetsEncrypt issuers using the DNS01 challenge

For now the tunnelled protocols are limited to HTTP and HTTPS, but the author intends to add support for plain TCP traffic in future updates. The future looks bright, and capable readers might consider contributing.

How it works

Architecture diagram (image not reproduced)

Flow

Summarized briefly, the flow looks like this:

local app port <==http(s)/ws(s)==> inlets (client) <==ws(s)==> inlets (server) <==http(s)==> user's browser

A worked example

Theory alone is of limited use, so let me illustrate with a setup of my own.

Overview

The inlets client and server talk to each other over websockets. If you want certificate-based (SSL) encryption, you place a web server in between; it can be nginx or caddy. Here I use caddy because its configuration is comparatively simple, and also because it issues certificates automatically. Thanks to Google's aggressive promotion, HTTPS sites are becoming the norm, so giving your site a certificate is no bad thing.

Prerequisites

  • A host with a public IP. If the host is in China and you also need ports 80 and 443 open, you will have to deal with ICP filing as well
  • A domain name whose DNS records you can edit freely; free or paid both work
  • Patience: two machines are involved and there is no one-click script yet
  • Server and client require a 64-bit system, although arm devices are now supported too

Test environment

This may just be my own tinkering, but I hope it gives you some ideas.

  • Domain: etspace.xyz
  • Server public IP: 165.227.56.252
  • Local client host: a file-storage server on an ESXi host
  • Services to publish: the netdata status page of my router and the RouterOS login page

Configure DNS records

You will already have a VPS, so I will not cover buying one. Go to your DNS management console and add records to the domain. I use he.net: log in to the control panel and create the relevant records.

Three records are needed:

Name                  Type  Purpose
inlets.etspace.xyz    A     main control-plane communication
routeros.etspace.xyz  A     maps the local RouterOS login page
netdata.etspace.xyz   A     maps the local netdata status page

Installing and configuring inlets on the server

inlets is written in Go, so you only need to download the right binary for your platform from the releases page. Revisiting the project page, I see the author has added quite a few features. It is a little unfortunate that the full-featured tunnel, inlets-pro, is not free; do support the author if you can. For ordinary users, plain inlets already covers most needs.

  • Install inlets

The official one-liner: curl -sLS https://get.inlets.dev | sh

Alternatively you can use the author's newer companion tool, inletsctl, which can download, update and configure inlets for you. I normally install inlets this way, so I no longer need to fetch each update from GitHub by hand.

  • Install inletsctl

Official one-liner: curl -sLSf https://inletsctl.inlets.dev | sh

  • Download and install inlets via inletsctl

Run inletsctl download; inlets is installed into /usr/local/bin/, after which the inlets command is available.

Before carrying on with the next steps, it is worth getting to know inlets' commands and flags.

View them with the built-in help:

inlets --help

Inlets combines a reverse proxy and websocket tunnels to expose your internal
and development endpoints to the public Internet via an exit-node.

An exit-node may be a 5-10 USD VPS or any other computer with an IPv4 IP address.
You can also use inlets to bridge connect between private networks.

It is strongly recommended to put a reverse proxy with TLS/SSL enabled such as
Nginx or Caddy in front of your inlets server to enable an encrypted tunnel.

See: https://github.com/inlets/inlets for more information.

Usage:
inlets [flags]
inlets [command]

Available Commands:
client Start the tunnel client.
help Help about any command
server Start the tunnel server.
version Display the clients version information.

Flags:
-h, --help help for inlets

Use "inlets [command] --help" for more information about a command.
inlets server --help
Start the tunnel server on a machine with a publicly-accessible IPv4 IP
address such as a VPS.

Example: inlets server -p 80
Example: inlets server --port 80 --control-port 8080

Note: You can pass the --token argument followed by a token value to both the
server and client to prevent unauthorized connections to the tunnel.

Usage:
inlets server [flags]

Flags:
-c, --control-port int control port for tunnel (default 8080)
--disable-transport-wrapping disable wrapping the transport that removes CORS headers for example
-h, --help help for server
-p, --port int port for server and for tunnel (default 8000)
--print-token prints the token in server mode (default true)
-t, --token string token for authentication
-f, --token-from string read the authentication token from a file
inlets client --help

Start the tunnel client.

Example: inlets client --remote=192.168.0.101:80 --upstream=http://127.0.0.1:3000
Note: You can pass the --token argument followed by a token value to both the server and client to prevent unauthorized connections to the tunnel.

Usage:
inlets client [flags]

Flags:
-h, --help help for client
--print-token prints the token in server mode (default true)
-r, --remote string server address i.e. 127.0.0.1:8000 (default "127.0.0.1:8000")
-t, --token string authentication token
-f, --token-from string read the authentication token from a file
-u, --upstream string upstream server i.e. http://127.0.0.1:3000
  • Generate a pre-shared key

From the help output above, note the -t and -f flags. The server and client need a way to authenticate each other, otherwise anyone could connect to the tunnel. Here we use a pre-shared key: -t takes the key as a string, while -f reads it from a file, so you can simply save the key into a file and copy that to the client.

Following the project's README, generate the key from the system's random source: run echo $(head -c 16 /dev/urandom | shasum | cut -d" " -f1) and the resulting string is your key. Copy it straight into your commands, or save it to a file and download it for the client.

  • Start the server and test the connection

Run the server inside tmux: inlets server -t ssssss -p 8000, then press ctrl+b followed by d to detach it to the background. The server argument runs it in server mode; the client side uses client.

With the server up, install inlets on the local client machine as well and test the connection (the token must match the one given to the server):

inlets client -r 165.227.56.252:8000 -t ssssss -u http://172.16.1.2:5000

This runs inlets in client mode, connecting to the remote address 165.227.56.252 on port 8000 with the shared key, and maps the local address http://172.16.1.2:5000. Visiting 165.227.56.252:8000 in a browser is now the same as visiting http://172.16.1.2:5000 locally - in my case, a Synology login page.

The connection works and we have NAT traversal, but this is not quite what we want: a tool that can only expose a single service is nothing to brag about. Bear with me for the next steps.

  • Port multiplexing and multiple endpoints

inlets cannot do all of this by itself, but it only takes one helper. The official example uses caddy, so we will too; it also solves the certificate problem. Installing caddy is not the focus of this post, so I will skip it (allow me to be lazy).

Note that caddy here is the v1 release and the configuration below is written for v1. The v2 configuration format changed substantially and I have not yet worked through its wiki; if you are interested, take a look - but beware that the first result when you google caddy is v2.

Run inlets as a service that starts on boot

This is simply a systemd unit: create the file inlets.service under /etc/systemd/system/ with the following content

[Unit]
Description=Inlets Server Service
After=network.target

[Service]
Type=simple
Restart=always
RestartSec=1
StartLimitInterval=0
# the environment file below supplies AUTHTOKEN
EnvironmentFile=/etc/default/inlets
ExecStart=/usr/local/bin/inlets server --port=8000 --token="${AUTHTOKEN}"

[Install]
WantedBy=multi-user.target
  • The unit uses a variable, ${AUTHTOKEN}, which refers to the key generated earlier. Being lazy, I load it from the environment file named in the unit, /etc/default/inlets. Create it with echo "AUTHTOKEN=$(head -c 16 /dev/urandom | shasum | cut -d" " -f1)" > /etc/default/inlets; note that systemd's EnvironmentFile expects plain KEY=value lines without an export prefix. Check the file's contents if you want to be sure.

  • Choose whatever port suits you; if unspecified it defaults to 8000, but it must match the Caddyfile written later

  • The client's startup unit can be written along the same lines

All that remains is to enable the service at boot and start it in the background:

systemctl enable inlets
systemctl start inlets
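Creating that environment file is a one-liner. The sketch below writes to /tmp so it has no side effects; on the server the real path is /etc/default/inlets:

```shell
# Write the token in systemd EnvironmentFile format: KEY=value, no "export"
echo "AUTHTOKEN=$(head -c 16 /dev/urandom | sha1sum | cut -d' ' -f1)" > /tmp/inlets.env
# Exactly one well-formed AUTHTOKEN line should be present
grep -c '^AUTHTOKEN=[0-9a-f]\{40\}$' /tmp/inlets.env   # prints 1
```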
  • Write the Caddyfile

I chose caddy for exactly two reasons: its configuration is simple, and it requests certificates automatically. I will not repeat how to use it; see the official documentation to get the most out of it. Everything here is configured with inlets in mind.

The inlets server and client connect over websockets, so caddy must forward that connection to inlets; it must also listen for HTTP and HTTPS requests to the published sites and forward those to inlets as well. The configuration looks like this:

inlets.etspace.xyz {
    tls urname@some-mail.com
    proxy / 127.0.0.1:8000 {
        transparent
    }
    proxy /tunnel 127.0.0.1:8000 {
        transparent
        websocket
    }
}

qh.etspace.xyz {
    tls ddxiong0410@gmail.com
    proxy / 127.0.0.1:8000 {
        transparent
    }
}

netdata.etspace.xyz {
    tls ddxiong0410@gmail.com
    proxy / 127.0.0.1:8000 {
        transparent
    }
}

Notice that the main control domain carries a little more configuration than the other two: websocket connection requests must be transparently forwarded to inlets' listening port. By default the inlets client speaks the websocket protocol to the /tunnel path of the domain; when caddy receives such a request it forwards it transparently to inlets on port 8000, the server and client authenticate with the pre-shared key, and everything else follows naturally.

Only one control domain is needed; it carries the main traffic between client and server. Each additional domain is an entry point for one internally mapped service; add and configure them as required.

In short, an inlets setup needs one main domain plus one service domain per published service. (If you only need to map a single service, the main domain can double as both authentication endpoint and entry point; that is exactly what the two proxy blocks under inlets.etspace.xyz above achieve.)

With the configuration written, restart caddy with systemctl restart caddy. Caddy will automatically request Let's Encrypt certificates for all three domains, so user traffic to these entry points is encrypted as well, which adds a measure of security.

  • Configure the inlets client

On the local client machine, write a startup unit modelled on the server's; mine looks like this:

[Unit]
Description=Inlets Client Service
After=network.target

[Service]
Type=simple
Restart=always
RestartSec=1
StartLimitInterval=0
EnvironmentFile=/etc/default/inlets
ExecStart=/usr/local/bin/inlets client --remote="${REMOTEHOST}" --token="${AUTHTOKEN}" --upstream="${UPSTREAM}"

[Install]
WantedBy=multi-user.target

Since inlets now runs in client mode rather than server mode, the flags change slightly: token stays the same, and remote and upstream are added. As before, I set them in the environment file, which makes them easy to modify and reload.

Edit /etc/default/inlets so that it contains:

AUTHTOKEN=ssssss
REMOTEHOST=wss://inlets.etspace.xyz
UPSTREAM="qh.etspace.xyz=http://172.16.1.2:5000,netdata.etspace.xyz=http://172.16.1.10:1999"

With certificates in play, the client and server now talk over an encrypted channel, so the scheme becomes wss. Each mapped local service pairs a domain name with a local IP and port; add as many entries as you need, separated by commas.

Once the file is written, enable inlets at boot and test it:

systemctl enable inlets
systemctl start inlets
systemctl status inlets
  • Try it out

Open https://qh.etspace.xyz and see: you should reach the Synology login page at home from anywhere, and https://netdata.etspace.xyz works the same way. To publish more services, repeat the same pattern.

Summary

inlets may not be as feature-rich as frp, but it is a fun tool. For users who only need web-based access, it not only maps services to the outside world but also pairs with caddy for certificate encryption and port multiplexing, which makes for a pleasant experience. It also will not conflict with something like v2ray on the same ports; if anything, adding inlets may even provide a little extra obfuscation, since requests to these domains really do return ordinary websites.
inlets可能功能上没有Frp强,但却是一个很好玩的工具,对于一些只需要进行网页端控制的用户而言,不仅做到了对外映射,还能配合caddy进行证书加密,体验还是很不错的,端口复用,也不用担心会和v2ray这样的科学上网发生冲突,或者说,由于inlets的加入也有可能增加一定的混淆能力,毕竟访问这些域名所获取到的可是实打实的一些网站~