Memory Issues
Memory (RAM) issues can affect both builds and running applications. Understanding the difference between temporary build-time spikes and persistent runtime memory problems helps you choose the right solution.
Build-Time Memory Spikes
RAM spikes during builds are normal behavior, especially for JavaScript and TypeScript projects.
Why it happens:
- `npm install` / `yarn install` loads the entire dependency tree into memory for resolution
- Bundlers like Webpack, Vite, and esbuild consume significant RAM during compilation
- Python pip may use extra memory when building wheels for native extensions
What to expect:
- A Node.js project build may temporarily use 1-2 GB of RAM
- A Python project with native dependencies may spike to 500 MB - 1 GB
- Memory returns to normal after the build completes
If builds consistently fail due to memory, consider upgrading to a server with at least 2 GB RAM. You can also stagger deployments so only one application builds at a time.
See Build Failures for more on build-related issues.
OOM Killer Stopping Containers
When a process exceeds available memory, the Linux kernel's Out-Of-Memory (OOM) killer terminates it.
Symptoms:
- Containers restart unexpectedly
- Application becomes unresponsive, then recovers after a restart
- No error in application logs (the process is killed externally)
How to confirm:
```
ssh app@<server-ip>

# Check kernel OOM events
sudo dmesg | grep -i "oom\|out of memory" | tail -20

# Check Docker events for OOM kills
docker events --filter event=oom --since 24h
```
Solutions:
- Reduce memory per process: Lower the number of workers or threads
  - Gunicorn: `--workers 2 --threads 2` instead of `--workers 4`
  - Celery: `--concurrency 2` instead of the default
  - Node.js: `NODE_OPTIONS=--max-old-space-size=256`
- Use memory-efficient alternatives: `uvicorn` with fewer workers, the `gevent` worker class for Gunicorn
- Set resource limits: In your `appliku.yml`, define memory limits per service to prevent one service from consuming all available RAM:

```yaml
services:
  web:
    command: gunicorn project.wsgi
    resources_limits_memory: 512M
  worker:
    command: celery -A project worker -l info
    resources_limits_memory: 256M
```
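The Gunicorn flags above can also live in a config file, which keeps the memory-related tuning in one place. A minimal sketch — the file name and values are illustrative, tune them for your workload:

```python
# gunicorn.conf.py -- illustrative values, not a recommendation
# Two workers with two threads each: four concurrent requests,
# but only two copies of the application loaded in memory.
workers = 2
threads = 2
worker_class = "gthread"  # thread-based workers; set "gevent" for the gevent class

# Recycle workers periodically so slow memory leaks can't grow unbounded;
# jitter staggers the restarts so workers don't all recycle at once.
max_requests = 1000
max_requests_jitter = 50
```

Start the server with `gunicorn -c gunicorn.conf.py project.wsgi` and the flags become unnecessary.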
Optimizing Application Memory
General Strategies
- Profile your application: Use language-specific memory profilers to identify leaks
  - Python: `tracemalloc`, `memory_profiler`
  - Node.js: the `--inspect` flag + Chrome DevTools memory profiler
  - Ruby: the `memory_profiler` gem
- Reduce worker count: Fewer processes = less memory. Find the right balance between concurrency and memory usage
- Lazy load resources: Load large files, ML models, or datasets on demand rather than at startup
- Use streaming: Process large datasets in chunks instead of loading everything into memory
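As a sketch of the profiling step using Python's built-in `tracemalloc` module (the list allocation below is just a stand-in for your own application code):

```python
import tracemalloc

tracemalloc.start()

# Stand-in for application work: allocate a large list of strings
data = [str(n) * 10 for n in range(100_000)]

# Overall traced memory: what is allocated now, and the high-water mark
current, peak = tracemalloc.get_traced_memory()
print(f"current: {current / 1024:.0f} KiB, peak: {peak / 1024:.0f} KiB")

# The allocation sites retaining the most memory -- a leak shows up here
# as a line whose size keeps growing between snapshots
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)

tracemalloc.stop()
```

Taking a snapshot at startup and another after the suspect code path, then diffing them with `snapshot.compare_to()`, narrows a leak down to specific lines.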
Language-Specific Tips
Python/Django:
- Set `CONN_MAX_AGE` to reuse database connections instead of opening new ones
- Use `iterator()` on large QuerySets to avoid loading all records into memory
- Avoid storing large objects in module-level variables
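The `iterator()` tip can be sketched as follows — `Event` and `myapp` are hypothetical names standing in for your own model, and the function needs a configured Django project to actually run:

```python
# Sketch only: assumes a Django project with a hypothetical Event model.
def export_event_ids(path):
    from myapp.models import Event  # hypothetical app/model

    with open(path, "w") as out:
        # iterator(chunk_size=...) streams rows from the database cursor
        # in batches instead of caching the entire QuerySet in memory;
        # only() further trims memory by fetching just the needed columns.
        for event in Event.objects.only("id").iterator(chunk_size=2000):
            out.write(f"{event.id}\n")
```

Without `iterator()`, evaluating a QuerySet populates its result cache, so a million-row export holds all million model instances in RAM at once.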
Node.js:
- Use the `--expose-gc` flag to enable manual garbage collection calls (`global.gc()`) when debugging memory behavior
- Watch for closure-related memory leaks in event handlers
- Use streams for file processing
Monitoring RAM Usage
Monitor your server's memory from the Appliku dashboard:
- Go to Servers and click on your server
- The overview page shows current RAM utilization
- Track memory trends over time to spot gradual increases (memory leaks)
From the command line:
```
# Current memory usage
free -h

# Per-container memory usage
docker stats --no-stream
```
When to Upgrade Your Server
Consider upgrading when:
- Builds consistently fail due to OOM
- Running containers regularly get OOM-killed
- Server RAM is consistently above 80% utilization
- You need to add more application processes but are memory-constrained
A good rule of thumb: start with 2 GB RAM for a single application and add 512 MB - 1 GB for each additional application on the same server.