Scaling up a Mastodon server by adding more Sidekiq workers is an effective way to handle increased load. Using systemd's template unit (service instance) feature, you can start multiple instances of the Sidekiq service, each dedicated to a different queue, which improves resource utilization and processing throughput.
Consult the Official Mastodon Documentation First!
Mastodon's official documentation provides detailed information on configurations and best practices for scaling your instance. Before making any changes, refer to the Mastodon Scaling Documentation. This resource will help you understand the nuances of Mastodon's architecture and how to optimize Sidekiq workers for different workloads. Each parameter specified in the systemd service file, such as DB_POOL, MALLOC_ARENA_MAX, and the Sidekiq -c (concurrency) option, plays a significant role in the behavior of the workers. Incorrect settings can lead to system instability, performance bottlenecks, or security vulnerabilities.
Step-by-Step Guide to Scale Your Mastodon Server
1. Create a systemd Service File
Start by creating a new systemd service file for the Mastodon Sidekiq workers. The "@" symbol in the file name makes this a template unit, so each service instance can be started with its own parameters.
sudo nano /etc/systemd/system/[email protected]
2. Configure the Service File
Paste the configuration below into your service file. It sets up the environment and specifies how each Sidekiq worker should start. Adjust parameters such as DB_POOL or the number of threads per Sidekiq instance (the -c option, here -c 100) as necessary for your setup; DB_POOL should be at least as large as the Sidekiq concurrency so that every worker thread can obtain a database connection.
[Unit]
Description=mastodon-sidekiq instance %i
After=network.target
[Service]
Type=simple
User=mastodon
WorkingDirectory=/home/mastodon/live
Environment="RAILS_ENV=production"
Environment="DB_POOL=100"
Environment="MALLOC_ARENA_MAX=2"
Environment="LD_PRELOAD=libjemalloc.so"
ExecStart=/home/mastodon/.rbenv/shims/bundle exec sidekiq -c 100 -q %i
TimeoutSec=15
Restart=always
# Proc filesystem
ProcSubset=pid
ProtectProc=invisible
# Capabilities
CapabilityBoundingSet=
# Security
NoNewPrivileges=true
# Sandboxing
ProtectSystem=strict
PrivateTmp=true
PrivateDevices=true
PrivateUsers=true
ProtectHostname=true
ProtectKernelLogs=true
ProtectKernelModules=true
ProtectKernelTunables=true
ProtectControlGroups=true
RestrictAddressFamilies=AF_INET
RestrictAddressFamilies=AF_INET6
RestrictAddressFamilies=AF_NETLINK
RestrictAddressFamilies=AF_UNIX
RestrictNamespaces=true
LockPersonality=true
RestrictRealtime=true
RestrictSUIDSGID=true
RemoveIPC=true
PrivateMounts=true
ProtectClock=true
# System Call Filtering
SystemCallArchitectures=native
SystemCallFilter=~@cpu-emulation @debug @keyring @ipc @mount @obsolete @privileged @setuid
SystemCallFilter=@chown
SystemCallFilter=pipe
SystemCallFilter=pipe2
ReadWritePaths=/home/mastodon/live
[Install]
WantedBy=multi-user.target
In the ExecStart command, -q %i tells Sidekiq to work on the queue named after the service instance, allowing you to dedicate specific workers to specific queues such as "default", "push", or "pull". Note that Mastodon's scheduler queue must only ever be processed by a single Sidekiq process, so never start more than one instance for it.
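If one queue needs different settings than the template's defaults, a systemd drop-in can override them for that instance only. The sketch below (the concurrency and pool values are illustrative, not recommendations) lowers the thread count for the instance handling the "pull" queue:

```shell
# Create a drop-in directory that applies only to the "pull" instance
sudo mkdir -p /etc/systemd/system/[email protected]

# Override the environment and ExecStart; the empty ExecStart= line
# clears the template's value before the new one is set
sudo tee /etc/systemd/system/[email protected]/override.conf > /dev/null <<'EOF'
[Service]
Environment="DB_POOL=25"
ExecStart=
ExecStart=/home/mastodon/.rbenv/shims/bundle exec sidekiq -c 25 -q %i
EOF

# Reload systemd and restart the instance so the drop-in takes effect
sudo systemctl daemon-reload
sudo systemctl restart [email protected]
```

This keeps the shared template untouched while letting each queue's worker be tuned independently.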
3. Reload Systemd and Enable the Services
After saving your changes, reload the systemd daemon to recognize the new service file:
sudo systemctl daemon-reload
Now, you can start and enable each Sidekiq worker as a separate service instance:
sudo systemctl start [email protected]
sudo systemctl start [email protected]
sudo systemctl start [email protected]
sudo systemctl enable [email protected]
sudo systemctl enable [email protected]
sudo systemctl enable [email protected]
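Once the instances are running, systemd's usual tooling can confirm that each worker picked up its queue (the unit names below match the examples above):

```shell
# List every Sidekiq instance spawned from the template
systemctl list-units 'mastodon-sidekiq@*'

# Check one instance's state and most recent output
sudo systemctl status [email protected]

# Follow the live logs of a single queue's worker
journalctl -u [email protected] -f
```

If an instance fails to start, the status and journal output will show the Sidekiq error before the Restart=always directive respawns it.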
This method allows you to effectively manage and scale the Sidekiq workers according to the load each queue experiences. By tailoring each instance to a specific task, the server can handle more processes in parallel, improving the responsiveness and throughput of your Mastodon instance.
Scaling your Mastodon server in this manner not only optimizes the distribution of workload but also enhances the stability and reliability of the service, ensuring a smoother experience for users as your platform grows.