Self-Hosting Uptime Ping: Zero-Dependency Monitoring Tool
You’ve got a handful of self-hosted services running. Nextcloud, Gitea, maybe a media server. How do you know when one goes down?
Most people reach for Uptime Kuma — and it’s great. But sometimes you want something simpler. No Node.js runtime, no database, no dashboard you’ll never look at. Just something that pings your services, tells you when one goes down, and gets out of the way.
That’s what a zero-dependency uptime ping setup gives you. In this guide, we’ll build a lightweight monitoring solution using nothing but a shell script and Docker — or skip Docker entirely if you prefer.
Why Zero-Dependency Monitoring?
Full monitoring stacks like Grafana + Prometheus or even Uptime Kuma come with overhead:
- Resource usage — Node.js, databases, web servers all consuming RAM
- Maintenance burden — Another service that can itself go down
- Complexity — More moving parts = more things to break
A simple ping monitor uses virtually no resources, runs from cron, and alerts you through whatever channel you already use.
What We’re Building
A monitoring setup that:
- Pings a list of URLs/services on a schedule
- Checks HTTP status codes and response times
- Sends alerts when a service goes down
- Sends recovery notifications when it comes back
- Tracks state to avoid alert spam
- Runs from a single shell script or minimal Docker container
Prerequisites
- A Linux server (any distro)
- curl (installed on virtually everything)
- A notification method (email, Telegram bot, Discord webhook, or Gotify)
Option 1: Pure Shell Script
This is the simplest approach — a bash script that runs from cron.
Create the Monitor Script
sudo mkdir -p /opt/uptime-ping
sudo nano /opt/uptime-ping/monitor.sh
#!/usr/bin/env bash
# uptime-ping — Lightweight service monitor
set -euo pipefail
CONFIG_DIR="/opt/uptime-ping"
STATE_DIR="$CONFIG_DIR/state"
TIMEOUT=10
mkdir -p "$STATE_DIR"
# ── Notification Settings ──
# Telegram (recommended)
TELEGRAM_BOT_TOKEN="${TELEGRAM_BOT_TOKEN:-}"
TELEGRAM_CHAT_ID="${TELEGRAM_CHAT_ID:-}"
# Discord webhook (alternative)
DISCORD_WEBHOOK="${DISCORD_WEBHOOK:-}"
# ── Services to Monitor ──
# Format: NAME|URL|EXPECTED_STATUS
SERVICES=(
    "Nextcloud|https://cloud.example.com|200"
    "Gitea|https://git.example.com|200"
    "Traefik|https://traefik.example.com/dashboard/|200"
    "Home Assistant|http://192.168.1.50:8123|200"
    "Jellyfin|http://192.168.1.50:8096|200"
)
# ── Alert Functions ──
send_alert() {
    local message="$1"

    if [[ -n "$TELEGRAM_BOT_TOKEN" ]]; then
        # || true: a failed notification must not abort the script (set -e is on)
        curl -s -X POST "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/sendMessage" \
            -d "chat_id=${TELEGRAM_CHAT_ID}" \
            -d "text=${message}" \
            -d "parse_mode=HTML" >/dev/null 2>&1 || true
    fi

    if [[ -n "$DISCORD_WEBHOOK" ]]; then
        # Note: double quotes in $message would break this JSON payload, and the
        # HTML tags used for Telegram render literally in Discord
        curl -s -X POST "$DISCORD_WEBHOOK" \
            -H "Content-Type: application/json" \
            -d "{\"content\": \"${message}\"}" >/dev/null 2>&1 || true
    fi

    # Always log
    echo "$(date '+%Y-%m-%d %H:%M:%S') $message" >> "$CONFIG_DIR/monitor.log"
}
# ── Check Services ──
for entry in "${SERVICES[@]}"; do
    IFS='|' read -r name url expected <<< "$entry"
    state_file="$STATE_DIR/$(echo "$name" | tr ' ' '_').state"

    # One request captures both the status code and the total time,
    # so each service is probed only once per run
    result=$(curl -s -o /dev/null -w "%{http_code} %{time_total}" --max-time "$TIMEOUT" "$url" 2>/dev/null || echo "000 0")
    http_code=${result%% *}
    response_time=${result##* }

    previous_state="up"
    [[ -f "$state_file" ]] && previous_state=$(cat "$state_file")

    if [[ "$http_code" == "$expected" ]]; then
        # Service is up
        echo "up" > "$state_file"
        if [[ "$previous_state" == "down" ]]; then
            send_alert "✅ <b>${name}</b> is back UP (${response_time}s)"
        fi
    else
        # Service is down
        echo "down" > "$state_file"
        if [[ "$previous_state" != "down" ]]; then
            send_alert "🔴 <b>${name}</b> is DOWN (HTTP ${http_code})"
        fi
    fi
done
Make It Executable
chmod +x /opt/uptime-ping/monitor.sh
Configure Notifications
For Telegram alerts, create a bot via @BotFather and get your chat ID:
export TELEGRAM_BOT_TOKEN="123456:ABC-DEF"
export TELEGRAM_CHAT_ID="987654321"
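If you don’t yet know your chat ID: send the bot any message, then call the Bot API’s getUpdates method and read the id out of the response. A sketch (the JSON line is a sample payload standing in for the real response):

```shell
# Fetch pending updates (run after messaging the bot once):
# curl -s "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/getUpdates"
# The chat ID sits at result[].message.chat.id; extracting it with grep:
response='{"ok":true,"result":[{"message":{"chat":{"id":987654321}}}]}'  # sample payload
echo "$response" | grep -o '"chat":{"id":[0-9-]*' | head -n1 | grep -o '[0-9-]*$'
# prints: 987654321
```

jq is the nicer tool for this if you have it installed; grep just avoids adding a dependency.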
For Discord, create a webhook in your server settings and set:
export DISCORD_WEBHOOK="https://discord.com/api/webhooks/..."
Add to Cron
Check every 2 minutes. Note that cron does not inherit your shell’s exported variables, so set them on the cron line itself (or hard-code them in the script):
crontab -e
*/2 * * * * TELEGRAM_BOT_TOKEN="123456:ABC-DEF" TELEGRAM_CHAT_ID="987654321" /opt/uptime-ping/monitor.sh
That’s it. Your services are now monitored.
Test It
# Run manually
TELEGRAM_BOT_TOKEN="your-token" TELEGRAM_CHAT_ID="your-id" /opt/uptime-ping/monitor.sh
# Check the log
cat /opt/uptime-ping/monitor.log
# Simulate a failure (change a URL to something broken, run again)
Option 2: Docker Container
If you prefer Docker (maybe for portability or to run alongside your other stacks):
Create the Docker Setup
mkdir -p ~/uptime-ping/state
cd ~/uptime-ping
touch monitor.log  # must exist as a file, or Docker creates a directory at the mount point
docker-compose.yml:
services:
  uptime-ping:
    image: alpine:latest
    container_name: uptime-ping
    restart: unless-stopped
    volumes:
      - ./monitor.sh:/monitor.sh:ro
      - ./state:/state
      - ./monitor.log:/monitor.log
    environment:
      - TELEGRAM_BOT_TOKEN=${TELEGRAM_BOT_TOKEN}
      - TELEGRAM_CHAT_ID=${TELEGRAM_CHAT_ID}
      - DISCORD_WEBHOOK=${DISCORD_WEBHOOK:-}
      - CHECK_INTERVAL=${CHECK_INTERVAL:-120}
    entrypoint: /bin/sh
    command: >
      -c "apk add --no-cache curl bash > /dev/null 2>&1 &&
      while true; do
        bash /monitor.sh;
        sleep $${CHECK_INTERVAL};
      done"
.env:
TELEGRAM_BOT_TOKEN=123456:ABC-DEF
TELEGRAM_CHAT_ID=987654321
CHECK_INTERVAL=120
Copy the monitor script from Option 1, adjusting CONFIG_DIR and STATE_DIR:
CONFIG_DIR="/"; STATE_DIR="/state"
Start It
docker compose up -d
docker logs -f uptime-ping
Adding TCP Port Checks
Not everything is HTTP. For services like databases or MQTT brokers, add TCP checks:
# A separate array for TCP checks; append this loop below the HTTP one
TCP_SERVICES=(
    "PostgreSQL|192.168.1.50|5432"
    "MQTT|192.168.1.50|1883"
    "SSH|192.168.1.50|22"
)

for entry in "${TCP_SERVICES[@]}"; do
    IFS='|' read -r name host port <<< "$entry"
    state_file="$STATE_DIR/$(echo "$name" | tr ' ' '_').state"
    previous_state="up"
    [[ -f "$state_file" ]] && previous_state=$(cat "$state_file")

    # bash's /dev/tcp pseudo-device opens a TCP connection; success = port open
    if timeout "$TIMEOUT" bash -c "echo >/dev/tcp/$host/$port" 2>/dev/null; then
        echo "up" > "$state_file"
        [[ "$previous_state" == "down" ]] && send_alert "✅ <b>${name}</b> (:${port}) is back UP"
    else
        echo "down" > "$state_file"
        [[ "$previous_state" != "down" ]] && send_alert "🔴 <b>${name}</b> (:${port}) is DOWN"
    fi
done
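The /dev/tcp trick is a bash feature, not POSIX, so it won’t work under busybox sh or dash. Where that matters, nc does the same probe. A sketch (the function name is illustrative; host and port come from the array above):

```shell
# TCP probe with netcat instead of bash's /dev/tcp
# -z: scan without sending data; -w: connect timeout in seconds
tcp_check() {
    local host="$1" port="$2"
    if nc -z -w 5 "$host" "$port" 2>/dev/null; then
        echo "port open"
    else
        echo "port closed"
    fi
}

# Usage: tcp_check 192.168.1.50 5432
```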
Adding Response Time Alerts
Slow services can be just as bad as down services:
# Add after the HTTP check. awk handles the float comparison because bc
# isn't installed everywhere (Alpine, for example)
slow_threshold="3.0"  # seconds
if awk -v t="$response_time" -v s="$slow_threshold" 'BEGIN { exit !(t > s) }'; then
    send_alert "🟡 <b>${name}</b> is SLOW (${response_time}s)"
fi
Daily Summary Report
Add a separate cron job for a daily summary:
#!/usr/bin/env bash
# daily-report.sh — Run once a day
# Reuses send_alert from monitor.sh: copy that function in here, or move it
# to a shared file and source it from both scripts.
STATE_DIR="/opt/uptime-ping/state"
nl=$'\n'  # a literal "\n" inside double quotes would not produce a newline
report="📊 <b>Daily Uptime Report</b>${nl}"
all_up=true

for state_file in "$STATE_DIR"/*.state; do
    name=$(basename "$state_file" .state | tr '_' ' ')
    state=$(cat "$state_file")
    if [[ "$state" == "up" ]]; then
        report+="✅ $name${nl}"
    else
        report+="🔴 $name${nl}"
        all_up=false
    fi
done

[[ "$all_up" == true ]] && report+="${nl}✅ All services operational"
send_alert "$report"
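A matching crontab entry, assuming the script is saved as /opt/uptime-ping/daily-report.sh and made executable:

```shell
# m h dom mon dow  command — send the summary at 08:00 every morning
0 8 * * * /opt/uptime-ping/daily-report.sh
```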
Comparison with Full Monitoring Tools
| Feature | Uptime Ping | Uptime Kuma | Grafana+Prometheus |
|---|---|---|---|
| RAM usage | ~0 MB | ~150 MB | ~500 MB+ |
| Dependencies | curl, bash | Node.js | Multiple services |
| Web dashboard | ❌ | ✅ | ✅ |
| Status page | ❌ | ✅ | ❌ (needs plugin) |
| Setup time | 5 minutes | 10 minutes | 30+ minutes |
| Alert channels | Any (scriptable) | Built-in | Alertmanager |
| Maintenance | Zero | Low | Medium |
When to use uptime-ping: You want dead-simple monitoring, don’t need a dashboard, and value minimal resource usage.
When to use Uptime Kuma: You want a web UI, status pages, or more sophisticated check types.
When to use Grafana: You need metrics, graphs, historical data, and complex alerting rules.
Troubleshooting
Alerts Not Sending
# Test Telegram directly
curl -s "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/sendMessage" \
-d "chat_id=${TELEGRAM_CHAT_ID}" \
-d "text=Test alert"
False Positives
If you’re getting alerts for services that are actually up:
- Increase TIMEOUT (some services are slow to respond)
- Check if the service requires authentication (might return 401 instead of 200)
- Make sure you’re checking the right URL (some apps redirect)
Cron Not Running
# Check cron is active
systemctl status cron  # the unit is 'crond' on RHEL-family distros
# Check cron logs
grep CRON /var/log/syslog | tail -20
State Files Stale
# Reset all state (will re-trigger alerts)
rm /opt/uptime-ping/state/*.state
Going Further
- Multiple servers: Run the script on a separate machine from what it monitors
- Heartbeat monitoring: Flip the model — have services ping you (dead man’s switch)
- SSL monitoring: Add certificate expiry checks with openssl s_client
- Pair with selfhost-doctor: Deep health checks on a schedule
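The certificate check can be sketched with openssl’s -checkend flag, which exits non-zero when the cert expires within the given number of seconds (the function name is illustrative, and the host in the usage line is a placeholder):

```shell
# Warn when a site's TLS certificate expires within N days (default 14)
check_cert_expiry() {
    local host="$1" days="${2:-14}"
    local seconds=$(( days * 86400 ))
    # s_client fetches the live cert; x509 -checkend tests its expiry window
    if echo | openssl s_client -servername "$host" -connect "$host:443" 2>/dev/null \
            | openssl x509 -noout -checkend "$seconds" >/dev/null 2>&1; then
        echo "cert ok"
    else
        echo "cert expiring soon (or host unreachable)"
    fi
}

# Usage: check_cert_expiry cloud.example.com 14
```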
Conclusion
You don’t need a complex monitoring stack to know when your services are down. A 50-line shell script, a cron job, and a Telegram bot give you 90% of what most self-hosters need.
Start simple. If you outgrow it, upgrade to Uptime Kuma — but you might find that a zero-dependency ping monitor is all you ever needed.
More self-hosting guides at selfhostsetup.com. Check your server health with selfhost-doctor.