
Self-Hosting Ente (Photos, Auth & Locker) via Docker Compose

1. Overview

Ente is an end-to-end encrypted alternative to Google Photos and Authy. While it is fully open source, self-hosting it requires careful configuration of cryptographic keys, time synchronisation, and object-storage networking.

This guide documents the deployment method that finally worked, after resolving critical issues with OTP failures, internal server errors (500), and failed mobile uploads.

2. Prerequisites

  • A VPS with Docker and Docker Compose installed.
  • A dedicated domain/subdomain pointing to the VPS.
  • An SMTP email account (e.g. Gmail or Hostinger) for sending OTPs.
  • Crucial: Root/Sudo access to the host machine.

3. Directory Structure

Create a dedicated directory to ensure data persistence.

mkdir -p /opt/ente/data
cd /opt/ente

4. Key Generation (Do Not Skip)

The server requires three cryptographic keys, each generated from exactly 32 random bytes. If a key ends up 33 or 34 bytes long (typically from stray characters picked up while copy-pasting), the server will panic (Error 500).

Run this command 3 times to generate your keys:

openssl rand -base64 32

Save these strings for the configuration step.
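
If you want to double-check a key before pasting it into the config (a quick sanity check; PASTE_KEY_HERE stands for one of the strings you just generated), decode it and count the raw bytes:

# Should print exactly 32; anything else means the key was truncated or padded
echo "PASTE_KEY_HERE" | base64 -d | wc -c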


5. Configuration Files (The "Golden" Configs)

These configurations include fixes for Timezone Drift, SMTP SSL, and Public Upload Access.

A. museum.yaml

Create this file in /opt/ente/museum.yaml.

Critical Adjustments Made:

  1. SMTP: Uses Port 465 (SSL) instead of 587 (STARTTLS) to prevent handshake errors.
  2. S3 Endpoint: Uses the Public IP of the VPS. If you use http://minio:9000, the mobile app will fail to upload because it cannot resolve the internal Docker container name.
http:
  port: 8080
  use_https: false # Let Nginx/YunoHost handle SSL

db:
  host: postgres
  port: 5432
  name: ente_db
  user: pguser
  password: pgpass
  sslmode: disable

smtp:
  host: "smtp.gmail.com" # Or your provider
  port: 465
  username: "your-email@gmail.com"
  password: "your-app-password"
  encryption: "ssl" # Implicit SSL on 465 avoids the STARTTLS handshake issues seen on 587
  email: "your-email@gmail.com"
  sender_name: "My Ente Cloud"

s3:
  are_local_buckets: true
  b2-eu-cen:
    key: minioadmin
    secret: minioadmin
    # IMPORTANT: Must be your VPS Public IP, not 'localhost' or 'minio'
    endpoint: http://YOUR_VPS_IP:9000
    region: eu-central-1
    bucket: ente-photos
  wasabi-eu-central-2-v3:
    key: minioadmin
    secret: minioadmin
    # IMPORTANT: Must be your VPS Public IP
    endpoint: http://YOUR_VPS_IP:9000
    region: eu-central-1
    bucket: ente-videos

# Paste your 32-byte OpenSSL keys here
jwt:
  secret: "PASTE_KEY_1_HERE"
key:
  encryption: "PASTE_KEY_2_HERE"
  hash: "PASTE_KEY_3_HERE"

internal:
  admins: []
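
Before restarting the stack, you can verify that your SMTP provider really accepts implicit SSL on port 465 (a minimal check; swap in your own provider's hostname):

# A successful handshake ends with the server's "220 ... ESMTP" greeting; Ctrl+C to exit
openssl s_client -connect smtp.gmail.com:465 -quiet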

B. docker-compose.yaml

Create this file in /opt/ente/docker-compose.yaml.

Critical Adjustments Made:

  1. Timezone Hard-Mount: We map /usr/share/zoneinfo/... directly. Without this, OTP codes will fail due to a 30-second drift between the host (Malaysia Time) and the container (UTC).
  2. MinIO Ports: We expose port 9000 to the host so the mobile app can reach the storage buckets.
services:
  # The Ente Server (API)
  museum:
    image: ghcr.io/ente-io/server:latest
    container_name: ente_museum
    restart: unless-stopped
    ports:
      - "54752:8080" # Maps internal 8080 to host 54752
    volumes:
      - ./museum.yaml:/museum.yaml:ro
      - ./data/logs:/var/logs
      - ./data/museum-data:/var/lib/museum
      # HARD-MOUNT TIMEZONE (Change to your local zone)
      - /usr/share/zoneinfo/Asia/Kuala_Lumpur:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
    depends_on:
      postgres:
        condition: service_healthy
      minio:
        condition: service_started
    environment:
      - ENTE_DB_PASSWORD=pgpass
      - TZ=Asia/Kuala_Lumpur

  # Database
  postgres:
    image: postgres:16
    container_name: ente_postgres
    restart: unless-stopped
    environment:
      POSTGRES_USER: pguser
      POSTGRES_PASSWORD: pgpass
      POSTGRES_DB: ente_db
      TZ: Asia/Kuala_Lumpur
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
      # Sync DB time to Host time
      - /usr/share/zoneinfo/Asia/Kuala_Lumpur:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U pguser -d ente_db"]
      interval: 5s
      timeout: 5s
      retries: 5

  # Object Storage (MinIO)
  minio:
    image: minio/minio
    container_name: ente_minio
    restart: unless-stopped
    ports:
      - "9000:9000" # Expose S3 port for Mobile Uploads
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    volumes:
      - ./data/minio:/data
      - /usr/share/zoneinfo/Asia/Kuala_Lumpur:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro

  # Automatic Bucket Provisioner
  minio-provision:
    image: minio/minio
    depends_on:
      - minio
    entrypoint: >
      /bin/sh -c "
      until mc alias set local http://minio:9000 minioadmin minioadmin; do echo 'Waiting for MinIO...'; sleep 1; done;
      mc mb local/ente-photos;
      mc mb local/ente-videos;
      exit 0;
      "


6. Reverse Proxy Configuration (Nginx)

When setting up the reverse proxy (via YunoHost or standard Nginx), the following blocks are required to prevent upload failures for large video files.

Destination: http://127.0.0.1:54752

Nginx Config additions:

# Allow infinite upload size (prevents 413 Entity Too Large)
client_max_body_size 0;

# Streaming uploads (prevents buffering to RAM)
proxy_request_buffering off;
proxy_buffering off;

# Long timeouts for slow connections
proxy_connect_timeout 3600s;
proxy_send_timeout 3600s;
proxy_read_timeout 3600s;
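
To confirm the proxy chain works end to end, hit the server's health route both directly and through your domain (a quick check assuming the /ping endpoint, which Ente's own compose files use for health checks):

# Directly against the container port (bypasses Nginx)
curl -i http://127.0.0.1:54752/ping

# Through Nginx/YunoHost and your domain
curl -i https://ente.yourdomain.com/ping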


7. The "Nuclear" Deployment Process

If you change keys, timezones, or endpoints after the first run, the existing database state becomes inconsistent and the server will panic (Error 500). You must wipe the data and redeploy from scratch; the steps below are also collected into a single script after the list.

  1. Stop Containers: docker compose down
  2. Wipe Data (The Nuclear Option): sudo rm -rf ./data
  3. Deploy: docker compose up -d
  4. Wait: Give it 30 seconds. Check logs for "We have lift-off". docker compose logs -f museum
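
The same sequence, collected into a single script you can paste into the VPS (a sketch of the steps above; remember the rm -rf wipes Postgres, MinIO, and the logs):

cd /opt/ente
docker compose down
sudo rm -rf ./data        # the nuclear option: all accounts and uploads are gone
docker compose up -d
sleep 30
# Look for the startup banner; fall back to following the logs if it is not there yet
docker compose logs museum | grep -i "lift-off" || docker compose logs -f museum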

8. Client Connection (The "7 Taps" Secret)

  1. Open Ente Auth or Ente Photos.
  2. Tap the Ente logo on the welcome screen 7 times.
  3. Enable "Custom Endpoint" and enter your domain: https://ente.yourdomain.com
  4. Enter email and tap "Send" ONCE.
  • Note: If you tap twice, or if your server time is drifting, you will get "Incorrect Code".

9. Troubleshooting & Lessons Learnt

A. "Incorrect Code" (401 Error)

  • Cause: The host (VPS) and the Docker containers disagreed on the time by about 30 seconds, because Docker defaults to UTC even when the host is set to MYT.
  • Fix: Hard-mounting /usr/share/zoneinfo/Asia/Kuala_Lumpur in docker-compose.yaml forces them to sync exactly.
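
A quick way to confirm the fix took effect is to compare the clocks directly (host first, container second; the two lines should match to the second):

date && docker exec ente_museum date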

B. "Internal Server Error" (500 Error) during Signup

  • Cause: We used keys that were 33 or 34 bytes long. Ente panics if keys are not exactly 32 bytes.
  • Fix: Ensure openssl rand -base64 32 is used and pasted cleanly without hidden characters.

C. Uploads Failed (Stuck at 0%)

  • Cause: The museum.yaml endpoint was set to http://minio:9000. The mobile phone cannot resolve "minio".
  • Fix: Change the endpoint to the public IP and expose port 9000 in Docker, then open port 9000 in the firewall (sudo yunohost firewall allow TCP 9000). A quick reachability check is shown below.
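
To confirm the bucket endpoint is reachable from outside the VPS, query MinIO's liveness route from another machine (a quick check; replace YOUR_VPS_IP as before):

# HTTP 200 means the phone will also be able to reach the storage endpoint
curl -i http://YOUR_VPS_IP:9000/minio/health/live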

10. Post-Install: Increasing Storage

By default, self-hosted accounts get 10GB. Run this command to inject a storage bonus (e.g., raising the total to 24GB or 42GB).

Command (Run inside VPS):

docker exec -it ente_postgres psql -U pguser -d ente_db -c "INSERT INTO storage_bonus (bonus_id, user_id, storage, type, valid_till) VALUES ('upgrade-manual', (SELECT user_id FROM users LIMIT 1), 15032385536, 'ADD_ON_SUPPORT', 4102444800000000);"

  • 15032385536 bytes = 14GB bonus (Total 24GB).
  • 34359738368 bytes = 32GB bonus (Total 42GB).
  • 4102444800000000 = expiry in the year 2100 (timestamp in microseconds).
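
To confirm the bonus row was written, query the table back (a quick check using the same container and credentials as above):

docker exec -it ente_postgres psql -U pguser -d ente_db \
  -c "SELECT user_id, storage, type, valid_till FROM storage_bonus;"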



Edit 1: Post-Installation Security & Network Stability

After successfully deploying the instance and verifying the upload pipeline, two critical post-installation steps were performed to secure the server and handle common environment-specific network errors.

1. Security Hardening: Disabling Public Registration

Once the primary admin account has been created, it is vital to close the registration loophole. Leaving it open allows any unauthorised user who discovers the URL to create an account on your private server.

Steps to Disable:

  1. Open the configuration file:
nano /opt/ente/museum.yaml

  2. Locate the internal: block at the bottom of the file.
  3. Add the disable-registration flag:
internal:
  admins: []
  disable-registration: true  # <--- Disables new sign-ups

  4. Restart the service to apply changes:
docker compose restart museum

Result: Existing sessions remain active, but new attempts to sign up via the mobile app or web interface will be rejected by the server.
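
A quick way to confirm the flag is in place and the server came back cleanly (a minimal sanity check):

# The flag should appear directly under the internal: block
grep -A2 "^internal:" /opt/ente/museum.yaml

# Museum should log a normal startup, with no panic lines
docker logs --tail 20 ente_museum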

2. Troubleshooting: Docker vs. Firewall Race Conditions

On systems with strict firewall management (such as YunoHost or managed VPS environments), restarting containers may occasionally trigger a fatal network error.

The Error:

Error response from daemon: driver failed programming external connectivity on endpoint ente_museum ... (iptables failed: iptables --wait -t nat -A DOCKER ... No chain/target/match by that name.)

The Cause: This is a "race condition". Docker attempts to write a NAT rule to forward ports (e.g., 54752 to 8080), but the system firewall (YunoHost) temporarily locks or flushes the iptables chains during the service restart, causing Docker to lose its place.

The Solution:

  1. Immediate Fix: Simply run the docker compose restart command a second time.
  • Observation: In our deployment, the second attempt succeeded immediately (Restarting ... 0.3s).
  2. Nuclear Fix: If the error persists after multiple retries, reset the Docker daemon's connection to the network stack:
systemctl restart docker

Verdict: This error is usually transient and does not indicate a broken configuration if a subsequent retry succeeds.
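
If you want to automate the retry, a small loop works (a sketch, not part of the original procedure; run it from /opt/ente):

for i in 1 2 3; do
  docker compose restart museum && break
  echo "Attempt $i failed, retrying in 5s..."
  sleep 5
done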


Edit 2: Resolving DNS Loops & Hardening Network Stability

Following the initial deployment, a critical "DNS Loop" error was observed after a Docker service restart. This caused the application to enter a restart loop with panic errors. This section details the resolution and the permanent fix.

1. The "Lookup Postgres" Panic

Symptoms: The containers appear to start, but the museum logs show a repeating panic error:

panic: dial tcp: lookup postgres on 127.0.0.1:53: read: connection refused

The Cause: This is a Docker/Host DNS conflict.

  1. Docker containers usually inherit the Host's DNS settings.
  2. If the Host (VPS) uses a local resolver like dnsmasq (listening on 127.0.0.1), the container tries to query its own 127.0.0.1.
  3. Since the container has no DNS server running inside it, the connection is refused, and the application crashes because it cannot find the database hostname.
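
You can confirm which resolver a container is actually using before applying any fix (a quick check; ente_postgres works too if museum is crash-looping):

# A nameserver of 127.0.0.1 (rather than 127.0.0.11 or a real upstream) confirms the loop
docker exec ente_museum cat /etc/resolv.conf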

The Immediate Fix (Network Reset): If this error occurs, a simple restart is often insufficient. You must prune the corrupted network bridge:

# 1. Stop everything
docker compose down

# 2. Delete the broken network bridge (Crucial)
docker network prune -f

# 3. Restart the Docker Daemon to reload firewall rules
systemctl restart docker

# 4. Rebuild the stack
docker compose up -d --force-recreate

2. Permanent Prevention: Hardening daemon.json

To prevent this from recurring during system updates or reboots, we must force Docker to use external DNS servers (bypassing the Host's local resolver) and enable "Live Restore".

Configuration: Edit or create the Docker daemon config file:

nano /etc/docker/daemon.json

Add (or merge) the following settings:

{
  "dns": ["8.8.8.8", "1.1.1.1"],
  "live-restore": true
}

  • dns: Forces containers to use Google/Cloudflare DNS as their upstream resolvers instead of inheriting the host's local resolver, so name resolution keeps working regardless of the host's state (container-to-container names such as postgres are still handled by Docker's embedded DNS).
  • live-restore: Ensures that containers remain running even if the Docker Daemon (background service) is restarted for updates. This significantly increases uptime.

Apply changes. The dns setting is only read when the daemon starts, so a reload is not enough; restart the daemon instead (containers with restart policies come back up automatically):

systemctl restart docker
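
Two quick follow-up checks (a sketch; the recreate is needed because existing containers keep the resolv.conf they were created with):

# Should print "true" once live-restore is active
docker info --format '{{.LiveRestoreEnabled}}'

# Recreate the stack so the containers pick up the new DNS servers
cd /opt/ente && docker compose up -d --force-recreate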

3. False Positives (Normal Logs)

When monitoring logs (docker compose logs -f), the following behaviours are normal and do not indicate a failure:

  • MinIO Provisioner Exiting: ente-minio-provision-1 exited with code 0
    • Explanation: This container runs a script to create the buckets and then stops immediately. It is not supposed to stay running.

  • 404 Warnings on Root (/): WARN ... urlSanitizer Unknown API: /
    • Explanation: The Ente server is a mobile API, not a web server. Browsing to the root URL in a web browser will result in a 404. This is expected behaviour.