Spinning up a nostr relay in 15 minutes

The goal

We would like to spin up our own nostr relay at wss://nostrrelay.com. We will be using the nostrrelay.com domain throughout this post, so replace it with your own domain wherever it appears.

It took a total of 15 minutes following this guide to make a new nostr relay available at wss://nostrrelay.com.

Disclaimer: This will get us a working relay. To be production ready we will probably need to tweak the config.toml further, but it is a great starting point.

Getting a VPS

We are using a Hetzner CPX11 machine for about 5 bucks in this example, with an Ubuntu 22.04 image - but most Linux distributions should be able to do the job.

Adapting our domain name configuration

We now set the A-Record for the domain nostrrelay.com to point to the IP of our VPS.
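Before moving on, it is worth verifying that the record has propagated. A quick sanity check (the domain here is the placeholder from this post; use your own):

```shell
# Resolve the domain and confirm it returns the IP of the VPS.
# getent is available on any glibc-based distro; `dig +short nostrrelay.com A`
# works just as well if dnsutils is installed.
getent hosts nostrrelay.com
```

The output should contain the IP you just set in the A-Record; if it is empty, wait for DNS propagation before requesting certificates later on.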

Preparations on the machine

Updating and upgrading the packages

apt-get update -y
apt-get upgrade -y

Installing required packages

We now need to install docker (and docker-compose), since we will run both the relay and the nginx webserver that serves the websocket inside docker containers for easy maintenance.

We also need to install certbot to issue SSL certificates.

apt-get install -y docker-compose docker.io certbot

Configuring nginx as reverse proxy

We now create the files and folders needed to spin up nginx as a reverse proxy for our relay.

mkdir -p ~/relay/config/nginx
mkdir -p ~/relay/config/relay
touch ~/relay/config/nginx/nginx.conf
mkdir -p ~/relay/data/www/letsencrypt
mkdir -p ~/relay/data/relay
chmod -R 777 ~/relay/data/relay

We now need to initialize ~/relay/config/nginx/nginx.conf with a basic configuration.

# /etc/nginx/nginx.conf  

events {}

http {
    include /etc/nginx/mime.types;

    proxy_read_timeout 300;
    proxy_connect_timeout 300;
    proxy_send_timeout 300;

    server {
        listen 80 default_server;
        server_name nostrrelay.com;
        # we define the root directory
        root /var/www/letsencrypt/;

        # this is the folder where letsencrypt stores a file
        # to see if we really own the domain
        # we need to make it accessible
        location /.well-known/acme-challenge/ {
        }
    }
}

Configuring our nostr relay

We initialize the nostr relay configuration with a configuration file at ~/relay/config/relay/config.toml. For now, we simply change the domain name, the administrative pubkey, and the administrator contact email.

There are also a ton of fairly well documented additional configuration options that we are not touching at this point.

# Nostr-rs-relay configuration

[info]
# The advertised URL for the Nostr websocket.
relay_url = "wss://nostrrelay.com/"

# Relay information for clients.  Put your unique server name here.
name = "nostrrelay.com"

# Description
description = "Another nostr relay"

# Administrative contact pubkey
pubkey = "38261574a558f6c6f47279540c0e2a1414513d39a9fc9b2fa7c95ab824e913b4"

# Administrative contact URI
contact = "mailto:christopher@zechendorf.com"

# Favicon location.  Relative to the current directory.  Assumes an
# ICO format.
#favicon = "favicon.ico"

[diagnostics]
# Enable tokio tracing (for use with tokio-console)
#tracing = false

[database]
# Database engine (sqlite/postgres).  Defaults to sqlite.
# Support for postgres is currently experimental.
#engine = "sqlite"

# Directory for SQLite files.  Defaults to the current directory.  Can
# also be specified (and overridden) with the "--db dirname" command
# line option.
#data_directory = "."

# Use an in-memory database instead of 'nostr.db'.
# Requires sqlite engine.
# Caution; this will not survive a process restart!
#in_memory = false

# Database connection pool settings for subscribers:

# Minimum number of SQLite reader connections
min_conn = 3

# Maximum number of SQLite reader connections.  Recommend setting this
# to approx the number of cores.
max_conn = 3

# Database connection string.  Required for postgres; not used for
# sqlite.
#connection = "postgresql://postgres:nostr@localhost:7500/nostr"

[grpc]
# gRPC interfaces for externalized decisions and other extensions to
# functionality.
# Events can be authorized through an external service, by providing
# the URL below.  In the event the server is not accessible, events
# will be permitted.  The protobuf3 schema used is available in
# `proto/nauthz.proto`.
# event_admission_server = "http://[::1]:50051"

[network]
# Bind to this network address
address = "0.0.0.0"

# Listen on this port
port = 8080

# If present, read this HTTP header for logging client IP addresses.
# Examples for common proxies, cloudflare:
#remote_ip_header = "x-forwarded-for"
#remote_ip_header = "cf-connecting-ip"

# Websocket ping interval in seconds, defaults to 5 minutes
#ping_interval = 300

[options]
# Reject events that have timestamps greater than this many seconds in
# the future.  Recommended to reject anything greater than 30 minutes
# from the current time, but the default is to allow any date.
reject_future_seconds = 1800

[limits]
# Limit events created per second, averaged over one minute.  Must be
# an integer.  If not set (or set to 0), there is no limit.  Note:
# this is for the server as a whole, not per-connection.
# Limiting event creation is highly recommended if your relay is
# public!
#messages_per_sec = 5

# Limit client subscriptions created, averaged over one minute.  Must
# be an integer.  If not set (or set to 0), defaults to unlimited.
# Strongly recommended to set this to a low value such as 10 to ensure
# fair service.
#subscriptions_per_min = 0

# Limit how many concurrent database connections a client can have.
# This prevents a single client from starting too many expensive
# database queries.  Must be an integer.  If not set (or set to 0),
# defaults to unlimited (subject to subscription limits).
#db_conns_per_client = 0

# Limit blocking threads used for database connections.  Defaults to 16.
#max_blocking_threads = 16

# Limit the maximum size of an EVENT message.  Defaults to 128 KB.
# Set to 0 for unlimited.
#max_event_bytes = 131072

# Maximum WebSocket message in bytes.  Defaults to 128 KB.
#max_ws_message_bytes = 131072

# Maximum WebSocket frame size in bytes.  Defaults to 128 KB.
#max_ws_frame_bytes = 131072

# Broadcast buffer size, in number of events.  This prevents slow
# readers from consuming memory.
#broadcast_buffer = 16384

# Event persistence buffer size, in number of events.  This provides
# backpressure to senders if writes are slow.
#event_persist_buffer = 4096

# Event kind blacklist. Events with these kinds will be discarded.
#event_kind_blacklist = [
#    70202,
#]

# Event kind allowlist. Events other than these kinds will be discarded.
#event_kind_allowlist = [
#    0, 1, 2, 3, 7, 40, 41, 42, 43, 44, 30023,
#]

[authorization]
# Pubkey addresses in this array are whitelisted for event publishing.
# Only valid events by these authors will be accepted, if the variable
# is set.
#pubkey_whitelist = [
#  "35d26e4690cbe1a898af61cc3515661eb5fa763b57bd0b42e45099c8b32fd50f",
#  "887645fef0ce0c3c1218d2f5d8e6132a19304cdc57cd20281d082f38cfea0072",
#]

# Enable NIP-42 authentication
#nip42_auth = false
# Send DMs events (kind 4) only to their authenticated recipients
#nip42_dms = false

[verified_users]
# NIP-05 verification of users.  Can be "enabled" to require NIP-05
# metadata for event authors, "passive" to perform validation but
# never block publishing, or "disabled" to do nothing.
#mode = "disabled"

# Domain names that will be prevented from publishing events.
#domain_blacklist = ["wellorder.net"]

# Domain names that are allowed to publish events.  If defined, only
# events NIP-05 verified authors at these domains are persisted.
#domain_whitelist = ["example.com"]

# Consider an pubkey "verified" if we have a successful validation
# from the NIP-05 domain within this amount of time.  Note, if the
# domain provides a successful response that omits the account,
# verification is immediately revoked.
#verify_expiration = "1 week"

# How long to wait between verification attempts for a specific author.
#verify_update_frequency = "24 hours"

# How many consecutive failed checks before we give up on verifying
# this author.
#max_consecutive_failures = 20

[pay_to_relay]
# Enable pay to relay
#enabled = false

# The cost to be admitted to relay
#admission_cost = 4200

# The cost in sats per post
#cost_per_event = 0

# Url of lnbits api
#node_url = "<node url>"

# LNBits api secret
#api_secret = "<ln bits api>"

# Terms of service
#terms_message = """
#This service (and supporting services) are provided "as is", without warranty of any kind, express or implied.
#By using this service, you agree:
#* Not to engage in spam or abuse the relay service
#* Not to disseminate illegal content
#* That requests to delete content cannot be guaranteed
#* To use the service in compliance with all applicable laws
#* To grant necessary rights to your content for unlimited time
#* To be of legal age and have capacity to use this service
#* That the service may be terminated at any time without notice
#* That the content you publish may be removed at any time without notice
#* To have your IP address collected to detect abuse or misuse
#* To cooperate with the relay to combat abuse or misuse
#* You may be exposed to content that you might find triggering or distasteful
#* The relay operator is not liable for content produced by users of the relay
#"""

# Whether or not new sign ups should be allowed
#sign_ups = false
#secret_key = "<nostr nsec>"

Configuring and running docker compose

We now need to configure ~/relay/docker-compose.yml to spin up our nginx reverse proxy as well as the nostr relay.

version: "3"

services:
  nginx:
    container_name: nginx
    image: nginx:stable-alpine
    restart: unless-stopped
    volumes:
      - ./config/nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./data/www:/var/www
      - /etc/letsencrypt:/certs
    ports:
      - "80:80"
      - "443:443"

  relay:
    container_name: relay
    image: scsibug/nostr-rs-relay:0.8.8
    restart: unless-stopped
    volumes:
      - ./data/relay:/usr/src/app/db:Z
      - ./config/relay/config.toml:/usr/src/app/config.toml

Now we are ready to spin up our containers:

cd ~/relay
docker-compose up --build -d

IMPORTANT: The relay will not be publicly accessible yet, just the nginx webserver.
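We can quickly confirm that nginx answers on port 80 before requesting certificates. Any HTTP status line here, even a 404, proves the webserver is reachable (this assumes curl is installed, and uses this post's placeholder domain):

```shell
# Fetch only the response headers from our domain; replace with your own.
curl -sI http://nostrrelay.com/ | head -n 1
```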

Acquiring SSL Certificates

In order to allow for secure connections via SSL, we need valid certificates, which we can get from letsencrypt by executing the following command (make sure to use your domain):

certbot certonly --webroot -w ~/relay/data/www/letsencrypt -d nostrrelay.com

After letsencrypt has done its voodoo we should get a message like:

Successfully received certificate.

Finalizing the configuration of the nginx reverse proxy

Now that we have certificates we can finally adapt the ~/relay/config/nginx/nginx.conf. The completed file looks like this:

# /etc/nginx/nginx.conf  

events {}

http {
    include /etc/nginx/mime.types;

    proxy_read_timeout 300;
    proxy_connect_timeout 300;
    proxy_send_timeout 300;

    server {
        listen 80 default_server;
        server_name nostrrelay.com;
        # we define the root directory
        root /var/www/letsencrypt/;

        # this is the folder where letsencrypt stores a file
        # to see if we really own the domain
        # we need to make it accessible
        location /.well-known/acme-challenge/ {
        }
    }

    server {
        server_name nostrrelay.com;
        listen 443 ssl;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_certificate /certs/live/nostrrelay.com/fullchain.pem;
        ssl_certificate_key /certs/live/nostrrelay.com/privkey.pem;
        location / {
            proxy_pass http://relay:8080;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_set_header Host $host;
        }
    }
}

Make your relay publicly accessible

To make the relay publicly accessible we need to reload the updated nginx configuration.

docker exec nginx nginx -s reload

Alternatively, you can also stop and restart your containers:

cd ~/relay
docker-compose down
docker-compose up --build -d

Final touches

Even though certbot will automatically renew your certificates as needed, nginx won’t automatically load the updated certificates (so your wss connections won’t be secure after 90 days when the certificates expire). To remedy this, we need to add the following line to our crontab by executing crontab -e:

23 3 * * 1 docker exec nginx nginx -s reload

This will make the nginx webserver reload its configuration once a week at 03:23 in the morning.
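To confirm the entry was actually saved, we can list the crontab and filter for the reload job:

```shell
# List the current user's crontab and filter for our weekly reload job.
crontab -l | grep "nginx -s reload"
```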

What now?

Check if it’s up and running

We can check whether our nostr relay is up and running by opening https://websocketking.com. We enter our relay's URL wss://nostrrelay.com and click Connect. The log should read something like this (newest entries first):

10:21 02.80 Connected to wss://nostrrelay.com
10:21 02.58 Connecting to wss://nostrrelay.com

After the connection is established, you can request all text notes from the relay (it will also spit out new notes as they arrive):

["REQ","notes", {"kinds":[1]}]
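If you prefer the command line, nostr-rs-relay also serves a NIP-11 relay information document on the same endpoint over plain HTTPS, which makes for a quick sanity check (assuming curl is installed; replace the placeholder domain with your own):

```shell
# Ask the relay for its NIP-11 info document instead of a websocket upgrade.
curl -s -H "Accept: application/nostr+json" https://nostrrelay.com/
```

This should return a JSON document containing the name, description, and pubkey we set in config.toml.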

Check the nginx or relay logs

We can use Docker's log functionality to check the logs of either nginx or the relay:

docker container logs -n 100 -f nginx


docker container logs -n 100 -f relay

Check the database

IMPORTANT: We're querying the live database, so we need to be very sure of what we're doing.

If we’re curious about the database, we can go into the relay container and query the sqlite database:

docker exec -it relay bash
cd /usr/src/app/db
sqlite3 nostr.db

We can then execute arbitrary queries - for example this one, to get the total count of events:

select count(*) from event;