Serverless computing promises to eliminate server management and reduce costs. After building several serverless applications, I’ve learned it’s not a silver bullet. Here’s an honest assessment of when serverless makes sense and when it doesn’t.

What is Serverless?

Serverless means:

  • No server management
  • Automatic scaling
  • Pay-per-execution pricing
  • Event-driven architecture

But servers still exist—you just don’t manage them.

Pros of Serverless

1. No Server Management

Benefit:

  • No OS updates
  • No security patches
  • No capacity planning
  • Focus on code, not infrastructure

Reality: You trade server management for:

  • Cold start management
  • Deployment complexity
  • Vendor lock-in
  • Debugging challenges

2. Automatic Scaling

Benefit:

// Handles 1 or 1,000,000 requests automatically
exports.handler = async (event) => {
    // Your code
    return { statusCode: 200, body: 'OK' };
};

Reality:

  • Cold starts cause latency spikes
  • Concurrent execution limits
  • Need to handle throttling (see the retry sketch below)
  • Cost can explode with scale
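
Throttling in particular needs explicit handling: once you hit the account's concurrent execution limit, Lambda rejects invocations with a 429. A minimal retry-with-backoff sketch, assuming the Node.js AWS SDK v2 and a placeholder function name:

const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

// Retry throttled invocations with exponential backoff (sketch)
async function invokeWithRetry(functionName, payload, maxAttempts = 5) {
    for (let attempt = 0; attempt < maxAttempts; attempt++) {
        try {
            return await lambda.invoke({
                FunctionName: functionName,
                Payload: JSON.stringify(payload)
            }).promise();
        } catch (err) {
            // Lambda signals throttling with TooManyRequestsException (HTTP 429)
            if (err.code !== 'TooManyRequestsException' || attempt === maxAttempts - 1) {
                throw err;
            }
            const delayMs = 100 * Math.pow(2, attempt); // 100ms, 200ms, 400ms, ...
            await new Promise((resolve) => setTimeout(resolve, delayMs));
        }
    }
}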

3. Cost Efficiency

Benefit:

  • Pay only for execution time
  • No idle server costs
  • Great for sporadic workloads

Example:

Traditional: EC2 t2.small = $15/month (24/7)
Serverless: 1M requests × 200ms × 512MB ≈ $2/month
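
To sanity-check figures like these, a back-of-the-envelope estimator helps. This is a sketch using the late-2017 list prices ($0.20 per million requests, $0.0000166667 per GB-second); it ignores the free tier and API Gateway:

// Rough monthly Lambda cost from request count, average duration, and memory
function lambdaMonthlyCost(requests, avgDurationSec, memoryMb) {
    const requestCost = (requests / 1e6) * 0.20;
    const gbSeconds = requests * avgDurationSec * (memoryMb / 1024);
    const computeCost = gbSeconds * 0.0000166667;
    return requestCost + computeCost;
}

console.log(lambdaMonthlyCost(1e6, 0.2, 512).toFixed(2));  // ~1.87 for the example above
console.log(lambdaMonthlyCost(30e6, 0.2, 512).toFixed(2)); // ~56.00 at 1M requests/day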

Reality:

  • Can be expensive for high-traffic, consistent workloads
  • Hidden costs: API Gateway, data transfer
  • Cold starts waste money

4. Faster Development

Benefit:

# Deploy in seconds
serverless deploy

# No infrastructure setup
# No server configuration

Reality:

  • Local development is harder
  • Testing requires more setup (see the sketch after this list)
  • Debugging is more complex
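
You can still smoke-test a handler with plain Node.js, even though that won't reproduce IAM permissions, API Gateway behaviour, or timeouts. A minimal sketch (the ./handler path and the event shape are assumptions):

// Invoke the exported handler locally with a hand-built event and context
const { handler } = require('./handler');

(async () => {
    const fakeEvent = { httpMethod: 'GET', path: '/health' };
    const fakeContext = { awsRequestId: 'local-test' };

    const result = await handler(fakeEvent, fakeContext);
    console.assert(result.statusCode === 200, 'expected a 200 response');
    console.log('local invoke result:', result);
})();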

Cons of Serverless

1. Cold Starts

Problem:

// First request: 2-5 seconds (cold start)
// Subsequent: 50-200ms (warm)

Impact:

  • Poor user experience
  • Unpredictable latency
  • Need warming strategies

Solutions:

// Keep functions warm by pinging them every few minutes. Run this from an
// always-on process or scheduler; a setInterval inside the Lambda itself
// won't fire, because execution is frozen between invocations.
const fetch = require('node-fetch');

setInterval(() => {
    fetch('https://api.example.com/keep-warm');
}, 5 * 60 * 1000); // Every 5 minutes
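
A second mitigation that doesn't burn invocations: do expensive initialization outside the handler, so it runs once per container instead of on every request. A sketch, with a placeholder SNS topic read from an environment variable:

const AWS = require('aws-sdk');   // loaded during the cold start only
const sns = new AWS.SNS();        // client (and its connections) reused by warm invocations

exports.handler = async (event) => {
    // Warm invocations skip straight to the work
    await sns.publish({
        TopicArn: process.env.TOPIC_ARN, // placeholder environment variable
        Message: JSON.stringify(event)
    }).promise();
    return { statusCode: 200, body: 'published' };
};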

2. Vendor Lock-In

Problem:

  • AWS Lambda code doesn’t run on Azure Functions
  • API Gateway is AWS-specific
  • Hard to migrate

Mitigation:

// Abstract vendor-specific code behind an adapter
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

class ServerlessAdapter {
    async invoke(functionName, payload) {
        // AWS-specific; swap this class out when targeting another provider
        return await lambda.invoke({
            FunctionName: functionName,
            Payload: JSON.stringify(payload)
        }).promise();
    }
}

// Use adapter everywhere
const adapter = new ServerlessAdapter();
await adapter.invoke('my-function', data);

3. Debugging Challenges

Problem:

  • No SSH access
  • Limited logging
  • Hard to reproduce issues locally

Solutions:

// Comprehensive logging
exports.handler = async (event, context) => {
    const logger = {
        info: (msg, data) => console.log(JSON.stringify({
            level: 'info',
            message: msg,
            data,
            requestId: context.awsRequestId
        })),
        error: (msg, error) => console.error(JSON.stringify({
            level: 'error',
            message: msg,
            error: error.message,
            stack: error.stack,
            requestId: context.awsRequestId
        }))
    };
    
    logger.info('Function started', { event });
    
    try {
        // Your logic
        return { statusCode: 200 };
    } catch (error) {
        logger.error('Function failed', error);
        throw error;
    }
};

4. Execution Time Limits

Problem:

  • AWS Lambda: 15 minutes max
  • Not suitable for long-running tasks

Workaround:

// Break the work into smaller functions that chain themselves
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

exports.processBatch = async (event) => {
    const batch = event.items.slice(0, 100);

    // Process this batch (processItems defined elsewhere)
    await processItems(batch);

    // If more items remain, hand them off to the next invocation
    if (event.items.length > 100) {
        await lambda.invoke({
            FunctionName: 'processBatch',
            InvocationType: 'Event', // async: don't wait (or pay) for the rest of the chain
            Payload: JSON.stringify({
                items: event.items.slice(100)
            })
        }).promise();
    }
};

5. Cost at Scale

Problem: High-traffic applications can be expensive:

1M requests/day (~30M/month)
× 200ms average duration
× 512MB memory
≈ $56/month (Lambda: ~$6 in request charges + ~$50 in compute)
+ ~$105/month (API Gateway at $3.50 per million calls)
+ data transfer costs
= can easily exceed the cost of a comparable EC2 instance

When Serverless is Expensive:

  • High, consistent traffic
  • Long-running functions
  • Large memory requirements
  • High data transfer

When to Use Serverless

Good Fit ✅

  1. Event-driven workloads
    • File processing (see the sketch after this list)
    • Webhooks
    • Scheduled tasks
  2. Sporadic traffic
    • Low baseline, occasional spikes
    • Batch processing
  3. Microservices
    • Small, focused functions
    • Independent deployment
  4. API backends
    • REST APIs
    • GraphQL APIs
    • Occasional cold-start latency acceptable
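
As an example of the event-driven case, here is a sketch of a file-processing function triggered by S3 object-created events (the processFile stub stands in for real work):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Placeholder for the actual processing logic
async function processFile(key, body) {
    console.log(`processing ${key} (${body.length} bytes)`);
}

exports.handler = async (event) => {
    // One S3 notification can carry several records
    for (const record of event.Records) {
        const bucket = record.s3.bucket.name;
        const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

        const object = await s3.getObject({ Bucket: bucket, Key: key }).promise();
        await processFile(key, object.Body);
    }
    return { statusCode: 200 };
};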

Not a Good Fit ❌

  1. Long-running processes
    • Video processing
    • Data ETL
    • Batch jobs > 15 minutes
  2. High, consistent traffic
    • High-traffic APIs
    • Real-time applications
    • WebSocket servers
  3. Stateful applications
    • WebSocket connections
    • Long-lived connections
    • In-memory state
  4. Tight latency requirements
    • Real-time gaming
    • Trading systems
    • Cold starts unacceptable

Hybrid Approach

Use serverless where it makes sense:

// Architecture
┌─────────────┐
│   API GW    │──→ Lambda (API endpoints)
└─────────────┘
      │
      ├──→ Lambda (File processing)
      ├──→ Lambda (Scheduled tasks)
      └──→ EC2 (WebSocket server)
               └──→ Lambda (Background jobs)

Example:

  • API endpoints: Lambda ✅
  • File processing: Lambda ✅
  • WebSocket: EC2 ✅
  • Scheduled tasks: Lambda ✅

Cost Comparison

Low Traffic (100K requests/month)

Serverless:

Lambda: $0.20
API Gateway: $0.35
Total: well under $1/month

EC2:

t2.micro: $8.50/month
Total: ~$9/month

Winner: Serverless

High Traffic (100M requests/month)

Serverless:

Lambda: ~$187 ($20 in request charges + ~$167 in compute)
API Gateway: ~$350 ($3.50 per million calls)
Data transfer: ~$100
Total: ~$640/month

EC2:

3x t2.large: $150/month
Load balancer: $20/month
Total: ~$170/month

Winner: EC2

Best Practices

  1. Keep functions small - Single responsibility
  2. Minimize cold starts - Keep packages small
  3. Use connection pooling - Reuse database connections (see the sketch after this list)
  4. Monitor costs - Set up billing alerts
  5. Handle errors gracefully - Retry logic
  6. Use environment variables - Configuration
  7. Version functions - Use aliases
  8. Test locally - Use SAM or Serverless Framework
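
For the connection-pooling point, the same "initialize outside the handler" trick applies to databases. A sketch assuming the pg client and connection details supplied via environment variables:

const { Pool } = require('pg');

// Created once per container; warm invocations reuse the open connection
const pool = new Pool({
    host: process.env.DB_HOST,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME,
    max: 1 // one connection per container keeps total DB connections bounded
});

exports.handler = async (event) => {
    const { rows } = await pool.query('SELECT now() AS time');
    return { statusCode: 200, body: JSON.stringify(rows[0]) };
};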

Migration Strategy

Start Small

// Phase 1: Move background jobs
// Old: Cron job on EC2
// New: Lambda + CloudWatch Events

// Phase 2: Move API endpoints
// Old: Express.js on EC2
// New: Lambda + API Gateway

// Phase 3: Move file processing
// Old: Worker on EC2
// New: Lambda + S3 trigger

Measure Everything

Track these metrics:

  • Execution time
  • Cold start frequency
  • Error rate
  • Cost per request
  • User experience impact

Conclusion

Serverless is powerful but not universal:

Use serverless when:

  • Workloads are event-driven
  • Traffic is sporadic
  • You want to reduce ops overhead
  • The economics work at your scale

Avoid serverless when:

  • Processes are long-running
  • Traffic is high and consistent
  • Latency requirements are tight
  • The application holds state

The key is understanding your workload and choosing the right tool. Serverless isn’t always cheaper or better—it’s different. Use it where it makes sense, and combine it with traditional infrastructure where it doesn’t.


Serverless analysis from December 2017, based on real production experience with AWS Lambda.