
Format a CSV Log File

Structured logs exported as CSV from logging platforms such as Datadog, CloudWatch, or custom log aggregators are easier to analyze than raw text. This example shows an application log with timestamp, level, service, request ID, and message fields. The CSV viewer makes it easy to filter by level or service and to spot error clusters. Use the remove-duplicates and sort tools to clean up log data before analysis.

Example
timestamp,level,service,request_id,message
2024-01-15T10:00:01Z,INFO,api-gateway,req_001,Request received: GET /api/users
2024-01-15T10:00:01Z,INFO,user-service,req_001,Fetching user list from database
2024-01-15T10:00:02Z,WARN,user-service,req_001,Slow query: 850ms for SELECT users
2024-01-15T10:00:02Z,INFO,api-gateway,req_001,Response sent: 200 OK in 923ms
2024-01-15T10:00:05Z,ERROR,payment-service,req_002,Connection timeout: stripe.com
2024-01-15T10:00:05Z,ERROR,api-gateway,req_002,Response sent: 503 Service Unavailable
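Filtering by level, as described above, can be sketched with Python's standard csv module. The sample data and the filter_by_level helper are illustrative, not part of any devtoolkit API:

```python
import csv
import io

# A few rows from the example log above.
LOG_CSV = """timestamp,level,service,request_id,message
2024-01-15T10:00:01Z,INFO,api-gateway,req_001,Request received: GET /api/users
2024-01-15T10:00:02Z,WARN,user-service,req_001,Slow query: 850ms for SELECT users
2024-01-15T10:00:05Z,ERROR,payment-service,req_002,Connection timeout: stripe.com
2024-01-15T10:00:05Z,ERROR,api-gateway,req_002,Response sent: 503 Service Unavailable
"""

def filter_by_level(csv_text, level):
    """Return all rows whose 'level' column matches the given level."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if row["level"] == level]

# Pull out the error cluster for req_002.
for row in filter_by_level(LOG_CSV, "ERROR"):
    print(row["service"], "-", row["message"])
```

The same pattern extends to filtering by service or by request_id, since DictReader exposes every column by its header name.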

FAQ

What is structured logging?
Structured logging records log entries as machine-readable key-value pairs (JSON or CSV) rather than plain text strings. This makes logs searchable, filterable, and easy to ingest into monitoring platforms.
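A minimal sketch of structured logging with Python's standard logging module: a custom formatter renders each record as a JSON object instead of a plain string. The service name is a hypothetical placeholder:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a machine-readable JSON object."""
    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record, "%Y-%m-%dT%H:%M:%SZ"),
            "level": record.levelname,
            "service": "user-service",  # hypothetical service name
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("user-service")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Emits one JSON object per line instead of free-form text.
logger.info("Fetching user list from database")
```

Each entry now carries the same named fields as the CSV example, so it can be exported, filtered, and ingested without fragile string parsing.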
What are the standard log levels?
Common levels in order of severity are TRACE, DEBUG, INFO, WARN, ERROR, and FATAL. Set the production log level to INFO to capture normal operations and errors without the noise of debug messages.
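This threshold behavior can be illustrated with Python's logging module, where a logger set to INFO discards anything below that severity:

```python
import logging

logger = logging.getLogger("payment-service")
logger.setLevel(logging.INFO)  # typical production threshold

# DEBUG sits below the INFO threshold, so it is filtered out;
# INFO, WARNING, and ERROR all pass through.
print(logger.isEnabledFor(logging.DEBUG))    # debug calls are dropped
print(logger.isEnabledFor(logging.INFO))     # normal operations are kept
print(logger.isEnabledFor(logging.ERROR))    # errors are always kept
```

Note that Python's standard library spells the levels DEBUG, INFO, WARNING, ERROR, and CRITICAL; TRACE and FATAL appear in other frameworks such as Log4j.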
How do I correlate logs across services?
Include a request_id or trace_id field in every log entry and propagate it across service calls. This lets you filter all log entries for a single end-to-end request.
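One way to propagate a correlation ID within a service is a context variable, so deeper functions never need an explicit req_id parameter. The log helper and function names below are illustrative assumptions:

```python
import contextvars
import json

# Correlation ID carried implicitly through the call chain of one request.
request_id = contextvars.ContextVar("request_id", default="unknown")

def log(level, service, message):
    """Emit a structured entry that always includes the current request_id."""
    print(json.dumps({
        "level": level,
        "service": service,
        "request_id": request_id.get(),
        "message": message,
    }))

def handle_request(req_id):
    # Set once at the edge (e.g. from an incoming X-Request-ID header).
    request_id.set(req_id)
    log("INFO", "api-gateway", "Request received")
    fetch_users()

def fetch_users():
    # Deeper in the call chain: the ID is picked up automatically.
    log("INFO", "user-service", "Fetching user list from database")

handle_request("req_001")
```

Across service boundaries the same ID is typically forwarded in an HTTP header (X-Request-ID or a W3C traceparent), then restored into the context on the receiving side.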
