Monday 21 December 2015

Route Traffic through a Tor Docker container

This blog post is going to explain how to route traffic on your host through a Tor Docker container.
It’s actually a lot simpler than you would think, but it involves dealing with some unsavory things such as iptables.

Run the Image

I have a fork of the Tor source code and a branch with a Dockerfile. I have submitted it upstream… we will see if they take it. The final result is the image jess/tor, but you can easily build it locally from my repo, jfrazelle/tor.
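If you would rather build locally than pull from the hub, it is just a clone and a build; a sketch, assuming the repo is on GitHub and you check out the branch that carries the Dockerfile:
$ git clone https://github.com/jfrazelle/tor.git
$ cd tor
# switch to the branch with the Dockerfile, then:
$ docker build -t tor .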
So let’s run the image:
$ docker run -d --net host --restart always --name tor jess/tor
Easy right? I can already hear the haters: “blah blah blah net host”. Chill out. The point is to route all our traffic, duhhhh, so we may as well; otherwise we would need to change / overwrite some of Docker’s iptables rules, and really who has time for that shit…
You do? Ok make a PR to this blog post.
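One assumption worth making explicit: for the rules in the next section to work, the torrc baked into the image needs a TransPort for the redirected TCP traffic and a DNSPort for DNS. I am inferring the exact ports from the script below; the relevant lines would look something like:
# transparent proxy settings (assumed; the ports must match the iptables script)
TransPort 9040
DNSPort 5353
# recommended by the Tor transparent proxy docs for resolving .onion addresses
AutomapHostsOnResolve 1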

Routing Traffic

Contain yourselves, I am about to throw down some sick iptables rules.
#!/bin/bash
# Most of this is credited to
# https://trac.torproject.org/projects/tor/wiki/doc/TransparentProxy
# With a few minor edits

# to run iptables commands you need to be root
if [ "$EUID" -ne 0 ]; then
    echo "Please run as root."
    exit 1
fi

### set variables
# destinations you don't want routed through Tor
_non_tor="192.168.1.0/24 192.168.0.0/24"

# get the UID that Tor runs as
_tor_uid=$(docker exec -u tor tor id -u)

# Tor's TransPort
_trans_port="9040"
_dns_port="5353"

### set iptables *nat
iptables -t nat -A OUTPUT -m owner --uid-owner $_tor_uid -j RETURN
iptables -t nat -A OUTPUT -p udp --dport 53 -j REDIRECT --to-ports $_dns_port

# allow clearnet access for hosts in $_non_tor
for _clearnet in $_non_tor 127.0.0.0/9 127.128.0.0/10; do
   iptables -t nat -A OUTPUT -d $_clearnet -j RETURN
done

# redirect all other output to Tor's TransPort
iptables -t nat -A OUTPUT -p tcp --syn -j REDIRECT --to-ports $_trans_port

### set iptables *filter
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# allow clearnet access for hosts in $_non_tor
for _clearnet in $_non_tor 127.0.0.0/8; do
   iptables -A OUTPUT -d $_clearnet -j ACCEPT
done

# allow only Tor output
iptables -A OUTPUT -m owner --uid-owner $_tor_uid -j ACCEPT
iptables -A OUTPUT -j REJECT
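These rules do not survive a reboot, and if something goes sideways you can always inspect or flush them (note that the flush clears any other OUTPUT rules you had, too):
# list the rules with their numbers
$ sudo iptables -t nat -L OUTPUT --line-numbers
$ sudo iptables -L OUTPUT --line-numbers

# nuke them to get your normal networking back
$ sudo iptables -t nat -F OUTPUT
$ sudo iptables -F OUTPUT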
Check that your traffic is actually being routed through Tor by visiting check.torproject.org.
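Or hit the check.torproject.org API from the command line; since all TCP and DNS now flows through the container, a plain curl should report something like:
$ curl -L https://check.torproject.org/api/ip
{"IsTor":true,"IP":"<an exit relay ip>"}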
from https://blog.jessfraz.com/post/routing-traffic-through-tor-docker-container/
-----------

Running a Tor relay with Docker


This post is part two of what will be a three part series. If you missed it, part one was How to Route Traffic through a Tor Docker container. I figured it was important, if you are going to be a Tor user, to document how you can help the Tor community by hosting a Tor relay. And guess what? You can use Docker to do this!
There are three types of relays you can host: a bridge relay, a middle relay, and an exit relay. Exit relays tend to be the ones receiving take-down notices, because theirs is the IP the public sees Tor traffic coming from. A great reference for hosting an exit node can be found at blog.torproject.org/blog/tips-running-exit-node-minimal-harassment. But I will go over how to host each from a Docker container. My exit example will have a reduced exit policy that limits which ports you are willing to route traffic through.
If you don’t want to host an exit node, host a middle relay instead! And if you want your relay to not be publicly listed in the network, then host a bridge.

Creating the base image

I have created a Docker image jess/tor-relay from this Dockerfile. Feel free to create your own image with the following Dockerfile:
FROM alpine:latest

# Note: Tor is only in testing repo
RUN apk update && apk add \
    tor \
    --update-cache --repository http://dl-3.alpinelinux.org/alpine/edge/testing/ \
    && rm -rf /var/cache/apk/*

# default port used for incoming Tor connections
# can be changed by changing 'ORPort' in torrc
EXPOSE 9001

# copy in our torrc files
COPY torrc.bridge /etc/tor/torrc.bridge
COPY torrc.middle /etc/tor/torrc.middle
COPY torrc.exit /etc/tor/torrc.exit

# make sure files are owned by tor user
RUN chown -R tor /etc/tor

USER tor

ENTRYPOINT [ "tor" ]
As you can see, we are copying three different torrc’s into the container: one each for a bridge, middle, and exit relay.
I used Alpine Linux because it is super minimal. The size of the image is 11.52MB! Crazyyyyyyy!
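If you want to build it yourself, drop the Dockerfile and the three torrc files into a directory and build like any other image:
$ docker build -t tor-relay .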

Running a bridge relay

A bridge relay is not publicly listed as part of the Tor network. This is helpful in places that block all the IPs of publicly listed Tor relays.
The torrc.bridge file for the bridge relay looks like the following:
ORPort 9001
## A handle for your relay, so people don't have to refer to it by key.
Nickname hacktheplanet
ContactInfo ${CONTACT_GPG_FINGERPRINT} ${CONTACT_NAME} ${CONTACT_EMAIL}
BridgeRelay 1
To run the image for a bridge relay:
# mount localtime so time is synced, restart always (why not?), and
# expose/publish the ORPort
$ docker run -d \
    -v /etc/localtime:/etc/localtime \
    --restart always \
    -p 9001:9001 \
    --name tor-relay \
    jess/tor-relay -f /etc/tor/torrc.bridge
And now you are helping the Tor network by running a bridge relay! Yayyy \o/
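You can confirm the relay actually came up by tailing its logs; once Tor has tested its own reachability, you should see a notice roughly like the one below (exact wording varies by Tor version):
$ docker logs -f tor-relay
...
[notice] Self-testing indicates your ORPort is reachable from the outside. Excellent.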

Running a middle relay

A middle relay is one of the relays your traffic flows through on its way out. Traffic always passes through at least three relays: the last one is an exit node, and every relay before it is a middle relay.
The torrc.middle file for the middle relay looks like the following:
ORPort 9001
## A handle for your relay, so people don't have to refer to it by key.
Nickname hacktheplanet
ContactInfo ${CONTACT_GPG_FINGERPRINT} ${CONTACT_NAME} ${CONTACT_EMAIL}
ExitPolicy reject *:*
To run the image for a middle relay:
# mount localtime so time is synced, restart always (why not?), and
# expose/publish the ORPort
$ docker run -d \
    -v /etc/localtime:/etc/localtime \
    --restart always \
    -p 9001:9001 \
    --name tor-relay \
    jess/tor-relay -f /etc/tor/torrc.middle
And now you are helping the Tor network by running a middle relay!

Running an exit relay

The exit relay is the last relay Tor traffic passes through before reaching its destination.
The torrc.exit file for the exit node looks like the following:
ORPort 9001
## A handle for your relay, so people don't have to refer to it by key.
Nickname hacktheplanet
ContactInfo ${CONTACT_GPG_FINGERPRINT} ${CONTACT_NAME} ${CONTACT_EMAIL}

# Reduced exit policy from
# https://trac.torproject.org/projects/tor/wiki/doc/ReducedExitPolicy
ExitPolicy accept *:20-23     # FTP, SSH, telnet
ExitPolicy accept *:43        # WHOIS
ExitPolicy accept *:53        # DNS
ExitPolicy accept *:79-81     # finger, HTTP
ExitPolicy accept *:88        # kerberos
ExitPolicy accept *:110       # POP3
ExitPolicy accept *:143       # IMAP
ExitPolicy accept *:194       # IRC
ExitPolicy accept *:220       # IMAP3
ExitPolicy accept *:389       # LDAP
ExitPolicy accept *:443       # HTTPS
ExitPolicy accept *:464       # kpasswd
ExitPolicy accept *:465       # URD for SSM (more often: an alternative SUBMISSION port, see 587)
ExitPolicy accept *:531       # IRC/AIM
ExitPolicy accept *:543-544   # Kerberos
ExitPolicy accept *:554       # RTSP
ExitPolicy accept *:563       # NNTP over SSL
ExitPolicy accept *:587       # SUBMISSION (authenticated clients [MUA's like Thunderbird] send mail over STARTTLS SMTP here)
ExitPolicy accept *:636       # LDAP over SSL
ExitPolicy accept *:706       # SILC
ExitPolicy accept *:749       # kerberos
ExitPolicy accept *:873       # rsync
ExitPolicy accept *:902-904   # VMware
ExitPolicy accept *:981       # Remote HTTPS management for firewall
ExitPolicy accept *:989-995   # FTP over SSL, Netnews Administration System, telnets, IMAP over SSL, ircs, POP3 over SSL
ExitPolicy accept *:1194      # OpenVPN
ExitPolicy accept *:1220      # QT Server Admin
ExitPolicy accept *:1293      # PKT-KRB-IPSec
ExitPolicy accept *:1500      # VLSI License Manager
ExitPolicy accept *:1533      # Sametime
ExitPolicy accept *:1677      # GroupWise
ExitPolicy accept *:1723      # PPTP
ExitPolicy accept *:1755      # RTSP
ExitPolicy accept *:1863      # MSNP
ExitPolicy accept *:2082      # Infowave Mobility Server
ExitPolicy accept *:2083      # Secure Radius Service (radsec)
ExitPolicy accept *:2086-2087 # GNUnet, ELI
ExitPolicy accept *:2095-2096 # NBX
ExitPolicy accept *:2102-2104 # Zephyr
ExitPolicy accept *:3128      # SQUID
ExitPolicy accept *:3389      # MS WBT
ExitPolicy accept *:3690      # SVN
ExitPolicy accept *:4321      # RWHOIS
ExitPolicy accept *:4643      # Virtuozzo
ExitPolicy accept *:5050      # MMCC
ExitPolicy accept *:5190      # ICQ
ExitPolicy accept *:5222-5223 # XMPP, XMPP over SSL
ExitPolicy accept *:5228      # Android Market
ExitPolicy accept *:5900      # VNC
ExitPolicy accept *:6660-6669 # IRC
ExitPolicy accept *:6679      # IRC SSL
ExitPolicy accept *:6697      # IRC SSL
ExitPolicy accept *:8000      # iRDMI
ExitPolicy accept *:8008      # HTTP alternate
ExitPolicy accept *:8074      # Gadu-Gadu
ExitPolicy accept *:8080      # HTTP Proxies
ExitPolicy accept *:8082      # HTTPS Electrum Bitcoin port
ExitPolicy accept *:8087-8088 # Simplify Media SPP Protocol, Radan HTTP
ExitPolicy accept *:8332-8333 # Bitcoin
ExitPolicy accept *:8443      # PCsync HTTPS
ExitPolicy accept *:8888      # HTTP Proxies, NewsEDGE
ExitPolicy accept *:9418      # git
ExitPolicy accept *:9999      # distinct
ExitPolicy accept *:10000     # Network Data Management Protocol
ExitPolicy accept *:11371     # OpenPGP hkp (http keyserver protocol)
ExitPolicy accept *:19294     # Google Voice TCP
ExitPolicy accept *:19638     # Ensim control panel
ExitPolicy accept *:50002     # Electrum Bitcoin SSL
ExitPolicy accept *:64738     # Mumble
ExitPolicy reject *:*
To run the image for an exit node:
# mount localtime so time is synced, restart always (why not?), and
# expose/publish the ORPort
$ docker run -d \
    -v /etc/localtime:/etc/localtime \
    --restart always \
    -p 9001:9001 \
    --name tor-relay \
    jess/tor-relay -f /etc/tor/torrc.exit
And now you are helping the Tor network by running an exit relay!
After it has been running for a couple of hours, giving it time to propagate, you can search atlas.torproject.org to verify that your node has successfully registered in the network.
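To find your relay on Atlas, search for its nickname or its fingerprint; assuming the Alpine package uses the stock DataDirectory of /var/lib/tor, you can grab the fingerprint straight out of the running container:
$ docker exec tor-relay cat /var/lib/tor/fingerprint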
Stay tuned for part three of the series, where I go over how to run Docker containers through a Tor networking plugin I am building with Docker’s new networking plugins. But of course, if you are going to use the plugin, or route all your traffic through a Tor Docker container (from my first post), you should really consider hosting a relay. The more people who run relays, the faster the Tor network will be.
from https://blog.jessfraz.com/post/running-a-tor-relay-with-docker/
----------

Tor Socks Proxy and Privoxy Containers


Okay, so this is part 2.5 in my series of posts combining my two favorite things: Docker and Tor. If you are just starting here, to catch you up, the first post was “How to Route all Traffic through a Tor Docker container”. The second was “Running a Tor relay with Docker”. I thought it only made sense to show how to set up a Tor socks5 proxy in a container for routing some of your traffic through Tor, in contrast to the first post, where I explained how to route all of it.

Tor Socks5 Proxy

I have made a Docker image for this which lives at jess/tor-proxy on the Docker Hub. But I will go over the details so you can build one yourself.
The Dockerfile looks like the following:
FROM alpine:latest

# Note: Tor is only in the testing repo -> http://pkgs.alpinelinux.org/packages?package=tor&repo=all&arch=x86_64
RUN apk update && apk add \
    tor \
    --update-cache --repository http://dl-3.alpinelinux.org/alpine/edge/testing/ \
    && rm -rf /var/cache/apk/*

# expose socks port
EXPOSE 9050

# copy in our torrc file
COPY torrc.default /etc/tor/torrc.default

# make sure files are owned by tor user
RUN chown -R tor /etc/tor

USER tor

ENTRYPOINT [ "tor" ]
CMD [ "-f", "/etc/tor/torrc.default" ]
This looks a lot like the Dockerfile for a relay, if you recall, but the key difference is the torrc. The only thing I have changed from the default torrc is the following line:
SocksPort 0.0.0.0:9050
This is so that it can bind correctly to the network namespace the container is using.
This image weighs in at only 11.51 MB!
To run the image:
# the read-only localtime mount is something i like for all my containers,
# but it's optional; -p publishes the socks port
$ docker run -d \
    --restart always \
    -v /etc/localtime:/etc/localtime:ro \
    -p 9050:9050 \
    --name torproxy \
    jess/tor-proxy
Okay, awesome, now you have the socks5 proxy running on port 9050. Let’s test it:
# get your current ip
$ curl -L http://ifconfig.me

# get your ip through the tor socks proxy
# (--socks5-hostname makes DNS resolve through tor too)
$ curl --socks5-hostname localhost:9050 -L http://ifconfig.me
# obviously they should be different ;)

# you can even curl the check.torproject.org api
$ curl --socks5-hostname localhost:9050 -L https://check.torproject.org/api/ip
If you are like me and use @ioerror’s gpg.conf you can uncomment the line:
keyserver-options http-proxy=socks5-hostname://127.0.0.1:9050
Now you can import and search for keys on a key server with improved anonymity. Obviously there are a bunch of other things you can use the socks proxy for, but I wanted to give this as an example.
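For instance, most tools that honor the standard proxy environment variables can be pointed at it; curl reads ALL_PROXY, and the socks5h scheme makes DNS resolve through Tor as well:
$ ALL_PROXY=socks5h://localhost:9050 curl https://check.torproject.org/api/ip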
Can we take this even further? Yes.

Privoxy HTTP Proxy

The socks proxy is awesome, but if you want an http proxy as well, it is super easy!
What we can do is link a Privoxy container to our Tor proxy container.
NOTE: I have seen people run a Tor socks proxy and Privoxy in the same container, but I prefer my approach of two different containers: it is cleaner, sometimes you do not need both, and you completely eliminate the need for an init system starting two processes in one container. Not that there is anything wrong with that, but it is not my personal preference.
So on to the Dockerfile, which also lives at jess/privoxy:
FROM alpine:latest

RUN apk update && apk add \
    privoxy \
    && rm -rf /var/cache/apk/*

# expose http port
EXPOSE 8118

# copy in our privoxy config file
COPY privoxy.conf /etc/privoxy/config

# make sure files are owned by privoxy user
RUN chown -R privoxy /etc/privoxy

USER privoxy

ENTRYPOINT [ "privoxy", "--no-daemon" ]
CMD [ "/etc/privoxy/config" ]
This image is a whopping 6.473 MB :D
The only change I made to the default privoxy config was the following:
forward-socks5 / torproxy:9050 .
This is so that when we link our torproxy container to the privoxy container, Privoxy can talk to the socks proxy.
Let’s run it:
# again the localtime mount is a personal preference; --link wires us up to
# our torproxy container and -p publishes the http port
$ docker run -d \
    --restart always \
    -v /etc/localtime:/etc/localtime:ro \
    --link torproxy:torproxy \
    -p 8118:8118 \
    --name privoxy \
    jess/privoxy
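If you want to double-check the link before testing, the torproxy alias should have landed in the privoxy container’s hosts file:
$ docker exec privoxy cat /etc/hosts | grep torproxy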
Awesome, now to test the proxy:
# get your current ip
$ curl -L http://ifconfig.me

# get your ip through the http proxy
$ curl -x http://localhost:8118 -L http://ifconfig.me
# obviously again, they should be different ;)

# curl the check.torproject.org api
$ curl -x http://localhost:8118  -L https://check.torproject.org/api/ip
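And if you want your shell tools to use the HTTP proxy by default, export the standard proxy variables (a sketch; anything that honors them will then go through Privoxy and Tor):
$ export http_proxy=http://localhost:8118
$ export https_proxy=http://localhost:8118
$ curl -L https://check.torproject.org/api/ip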
That’s all for now! Stay anonymous on the interwebs.
from https://blog.jessfraz.com/post/tor-socks-proxy-and-privoxy-containers/