Effective logging is critical for monitoring and troubleshooting applications deployed on AWS. However, with high log volumes, identifying critical issues in CloudWatch can be challenging. Integrating logs with team communication tools like Slack and storing them for audits can streamline incident response and compliance. This article outlines how to build a custom logging system using AWS Lambda to filter logs with specific markers, send real-time Slack notifications, and store logs in DynamoDB. This approach, born from a need to quickly detect and act on critical errors, provides a scalable solution for AWS-based applications.
The Need for Custom Logging
AWS CloudWatch efficiently collects logs, but its broad scope can obscure critical events. Teams often rely on Slack for collaboration, making it an ideal destination for urgent alerts. A custom logging system addresses this by:
- Filtering logs based on markers like [CRITICAL] or [ALERT].
- Sending immediate notifications to a designated Slack channel.
- Storing filtered logs in a database for auditing and analysis.
- Operating as a separate AWS component for modularity.
This system was developed after a critical database error went unnoticed in CloudWatch, prompting the need for faster detection and team notification.
System Architecture
The architecture leverages AWS serverless services and Slack’s webhook API:
- CloudWatch Logs: Collects raw logs from applications.
- AWS Lambda: Processes logs, filters by markers, sends Slack notifications, and writes to the database.
- Amazon DynamoDB: Stores filtered logs for persistence (RDS is an alternative for relational needs).
- Slack Webhook: Delivers formatted alerts to a specified channel.
Workflow:
- Applications send logs to CloudWatch with markers (e.g., [CRITICAL]).
- CloudWatch triggers a Lambda function via a subscription filter.
- Lambda filters logs, sends notifications to Slack, and stores data in DynamoDB.
The architecture can be visualised as:
[Application] → [CloudWatch Logs] → [Lambda] → [Slack]
↓
[DynamoDB]
Implementation Steps
The following steps assume an AWS account, a Slack workspace, and familiarity with AWS services. The Lambda function uses Python, though it can be adapted to other runtimes.
Step 1: Configure CloudWatch Logs
Ensure applications send logs to CloudWatch using:
- AWS SDKs (e.g., boto3 for Python).
- Logging libraries like Python's built-in logging module or winston, with CloudWatch integration.
- CloudWatch Agent for EC2 system logs.
Create a log group (e.g., /aws/app/myapp) and include markers in logs, such as:
2025-06-29T10:00:00Z [INFO] User login successful
2025-06-29T10:01:00Z [CRITICAL] Database connection failed
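With Python's standard logging module, the bracketed marker can come directly from the level name via the log format. A minimal sketch (the logger name "myapp" and the exact format string are assumptions, not part of the system above):

```python
import logging

# The format embeds the level in square brackets, producing lines like
# "2025-06-29T10:01:00Z [CRITICAL] Database connection failed" that the
# downstream subscription filter can match on.
formatter = logging.Formatter(
    fmt="%(asctime)s [%(levelname)s] %(message)s",
    datefmt="%Y-%m-%dT%H:%M:%SZ",
)
handler = logging.StreamHandler()
handler.setFormatter(formatter)

logger = logging.getLogger("myapp")  # hypothetical application logger
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("User login successful")
logger.critical("Database connection failed")
```

Anything written to stdout/stderr by a Lambda function, or shipped by the CloudWatch Agent from EC2, lands in the log group in this format.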
Step 2: Create the Lambda Function
Set up a Lambda function in the AWS Console:
- Runtime: Python 3.9 or later.
- IAM Role: Grant permissions for CloudWatch Logs, DynamoDB, and Secrets Manager.
- Timeout: 30 seconds.
- Memory: 128 MB.
Below is the Lambda code:
import json
import base64
import zlib
from datetime import datetime, timezone

import boto3
import requests  # not bundled with the Lambda runtime; package it or use a layer


def lambda_handler(event, context):
    # Decode the base64-encoded, gzip-compressed CloudWatch log payload
    log_data = base64.b64decode(event['awslogs']['data'])
    log_json = json.loads(zlib.decompress(log_data, 16 + zlib.MAX_WBITS))

    # Initialize clients
    slack_webhook = "https://hooks.slack.com/services/xxx/yyy/zzz"  # Use Secrets Manager
    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table('AppLogs')

    # Process log events (named log_event to avoid shadowing the handler's event argument)
    for log_event in log_json['logEvents']:
        message = log_event['message']
        if '[CRITICAL]' in message or '[ALERT]' in message:
            marker = 'CRITICAL' if '[CRITICAL]' in message else 'ALERT'

            # Send Slack notification
            slack_payload = {
                "text": f"{marker} Log: {message}",
                "channel": "#app-alerts",
                "username": "AWS Logger",
                "icon_emoji": ":aws:"
            }
            requests.post(slack_webhook, json=slack_payload, timeout=5)

            # Store in DynamoDB
            table.put_item(
                Item={
                    'log_id': str(log_event['id']),
                    'timestamp': datetime.fromtimestamp(
                        log_event['timestamp'] / 1000, tz=timezone.utc
                    ).isoformat(),
                    'marker': marker,
                    'message': message,
                    'app_name': log_json['logStream']
                }
            )

    return {
        'statusCode': 200,
        'body': json.dumps('Logs processed')
    }
Details:
- The function decodes compressed CloudWatch log data.
- It filters logs with [CRITICAL] or [ALERT] markers.
- Notifications are sent to Slack with the marker and message.
- Logs are stored in DynamoDB with metadata like log_id and timestamp.
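The decoding step can be exercised locally without AWS by building an event shaped the way CloudWatch Logs delivers it: a JSON payload that is gzip-compressed, then base64-encoded. A small sketch (the stream name and helper names are illustrative):

```python
import base64
import gzip
import json

def make_cloudwatch_event(log_events, log_stream="myapp-stream"):
    """Build a fake Lambda event shaped like a CloudWatch Logs delivery."""
    payload = {"logStream": log_stream, "logEvents": log_events}
    data = gzip.compress(json.dumps(payload).encode("utf-8"))
    return {"awslogs": {"data": base64.b64encode(data).decode("ascii")}}

def decode_cloudwatch_event(event):
    """Mirror the handler's decoding step: gzip.decompress is equivalent
    to zlib.decompress(raw, 16 + zlib.MAX_WBITS)."""
    raw = base64.b64decode(event["awslogs"]["data"])
    return json.loads(gzip.decompress(raw))

sample = make_cloudwatch_event(
    [{"id": "1", "timestamp": 1751191260000,
      "message": "2025-06-29T10:01:00Z [CRITICAL] Database connection failed"}]
)
decoded = decode_cloudwatch_event(sample)
```

Feeding such a synthetic event into the handler (with the Slack and DynamoDB calls stubbed out) is a quick way to verify the filtering logic before deploying.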
Step 3: Set Up Slack Webhook
To enable Slack notifications:
- Create a Slack app at api.slack.com/apps and enable incoming webhooks.
- Generate a webhook URL for the target channel (e.g., #app-alerts).
- Store the URL in AWS Secrets Manager for security.
Test the webhook:
curl -X POST -H 'Content-type: application/json' \
  --data '{"text":"Test alert from AWS"}' \
  https://hooks.slack.com/services/xxx/yyy/zzz
Step 4: Configure DynamoDB
Create a DynamoDB table named AppLogs with:
- Partition Key: log_id (String).
- Attributes: timestamp, marker, message, app_name.
Use on-demand capacity for scalability. Alternatively, use Amazon RDS for relational storage.
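The table can also be created programmatically. A sketch of the CreateTable parameters implied by the schema above (the helper names are illustrative; the client would be boto3.client("dynamodb") in practice):

```python
def build_table_spec(table_name="AppLogs"):
    """CreateTable parameters: log_id partition key, on-demand capacity.
    Non-key attributes (timestamp, marker, ...) need no declaration in
    DynamoDB -- they are stored per item."""
    return {
        "TableName": table_name,
        "AttributeDefinitions": [
            {"AttributeName": "log_id", "AttributeType": "S"},
        ],
        "KeySchema": [
            {"AttributeName": "log_id", "KeyType": "HASH"},
        ],
        "BillingMode": "PAY_PER_REQUEST",  # on-demand capacity
    }

def create_logs_table(client, table_name="AppLogs"):
    # client: a boto3 DynamoDB client, injected for testability
    return client.create_table(**build_table_spec(table_name))
```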
Step 5: Connect Components
Add a CloudWatch Logs subscription filter:
- Navigate to the log group in the CloudWatch console.
- Create a filter whose pattern matches either marker, e.g. ?"[CRITICAL]" ?"[ALERT]" (CloudWatch's OR syntax for quoted terms).
- Set the Lambda function as the destination.
- Test by sending logs with markers and verifying Slack notifications and DynamoDB entries.
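The same subscription filter can be set up with boto3. A hedged sketch (the filter name and ARN are placeholders; the console equivalent also grants CloudWatch Logs permission to invoke the function, which the API path requires you to do separately via lambda add-permission):

```python
def build_subscription_filter(log_group, lambda_arn,
                              filter_name="critical-alerts"):
    """Parameters for CloudWatch Logs put_subscription_filter. The pattern
    matches lines containing either bracketed marker."""
    return {
        "logGroupName": log_group,
        "filterName": filter_name,
        "filterPattern": '?"[CRITICAL]" ?"[ALERT]"',
        "destinationArn": lambda_arn,
    }

# In practice:
#   logs = boto3.client("logs")
#   logs.put_subscription_filter(**build_subscription_filter(
#       "/aws/app/myapp", lambda_arn))
```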
Security Considerations
To secure the system:
- Webhook Storage: Store the Slack webhook URL in AWS Secrets Manager and have Lambda retrieve it at runtime:
secrets_client = boto3.client('secretsmanager')
slack_webhook = secrets_client.get_secret_value(SecretId='SlackWebhook')['SecretString']
- IAM Permissions: Assign least-privilege permissions for CloudWatch Logs (logs:CreateLogStream, logs:PutLogEvents), DynamoDB (dynamodb:PutItem), and Secrets Manager (secretsmanager:GetSecretValue).
- Log Sanitization: Remove sensitive data from logs before sending to Slack or storing.
- Encryption: Enable encryption for DynamoDB and use HTTPS for Slack.
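Since Lambda containers are reused between invocations, the secret lookup is worth caching at module level so Secrets Manager is called once per container rather than once per event. A minimal sketch (the secret name "SlackWebhook" matches the snippet above; the cache and helper are assumptions):

```python
_SECRET_CACHE = {}

def get_slack_webhook(client, secret_id="SlackWebhook"):
    """Fetch the webhook URL once per Lambda container and reuse it,
    avoiding a Secrets Manager round trip on every invocation."""
    if secret_id not in _SECRET_CACHE:
        resp = client.get_secret_value(SecretId=secret_id)
        _SECRET_CACHE[secret_id] = resp["SecretString"]
    return _SECRET_CACHE[secret_id]

# In the handler:
#   slack_webhook = get_slack_webhook(boto3.client("secretsmanager"))
```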
Enhancements
The system can be extended by:
- Adding more markers (e.g., [INFO]) routed to different Slack channels.
- Integrating with PagerDuty or Jira for incident tracking.
- Using Amazon SNS for multi-destination alerts.
- Visualising logs with CloudWatch Dashboards or Grafana.
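For the SNS route, the Lambda function would publish to a topic instead of (or alongside) the webhook, letting SNS fan the alert out to email, SMS, or HTTPS subscribers. A hedged sketch with an illustrative helper and placeholder ARN:

```python
def publish_alert(sns_client, topic_arn, marker, message):
    """Publish one alert to an SNS topic; subscribers (email, SMS,
    HTTPS endpoints) each receive a copy."""
    return sns_client.publish(
        TopicArn=topic_arn,
        Subject=f"{marker} log alert",
        Message=message,
    )

# In the handler, alongside the Slack post:
#   publish_alert(boto3.client("sns"),
#                 "arn:aws:sns:us-east-1:123456789012:app-alerts",
#                 marker, message)
```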
Conclusion
This custom logging system, built with AWS Lambda, CloudWatch, DynamoDB, and Slack, enhances monitoring by filtering critical logs, notifying teams in real-time, and storing data for audits. It’s a practical, scalable solution for AWS applications.