
Generate an Nginx Log Troubleshooting Script

claude3.5 sonnet 20241022 (Anthropic)

## Troubleshooting Strategy Overview

This script is designed to analyze Nginx access logs for specific issues, focusing on HTTP status code 500 and timeout errors within a defined time range. It extracts relevant fields such as IP addresses, URLs, and status codes, and presents the results in a tabular format.

## Log Troubleshooting Script Source Code

```bash
#!/bin/bash

# Define log file path
LOG_FILE="/var/log/nginx/access.log"

# Define time range
START_TIME="2025-04-01 00:00:00"
END_TIME="2025-04-02 23:59:59"

# Define keywords
KEYWORDS=("500" "timeout")

# Function to convert time to epoch
convert_to_epoch() {
    date -d "$1" +%s
}

# Convert start and end time to epoch
START_EPOCH=$(convert_to_epoch "$START_TIME")
END_EPOCH=$(convert_to_epoch "$END_TIME")

# Filter by time range, match keywords, and print selected fields
awk -v start="$START_EPOCH" -v end="$END_EPOCH" -v keywords="${KEYWORDS[*]}" '
BEGIN {
    n = split(keywords, kw, " ");
    # Month map for the default Nginx timestamp format
    split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", m, " ");
    for (i = 1; i <= 12; i++) month[m[i]] = i;
}
{
    # Extract timestamp from $4, e.g. [01/Apr/2025:00:00:00 in the default
    # log format; adjust the field position for a custom log_format
    split(substr($4, 2), t, /[\/:]/);
    log_epoch = mktime(t[3] " " month[t[2]] " " t[1] " " t[4] " " t[5] " " t[6]);

    # Check if log entry is within the time range
    if (log_epoch >= start && log_epoch <= end) {
        for (i = 1; i <= n; i++) {
            if (index($0, kw[i]) > 0) {
                # IP is the 1st field, URL the 7th, status the 9th
                print $1, $7, $9;
                break;  # avoid duplicate output when several keywords match
            }
        }
    }
}' "$LOG_FILE" | column -t
```

## Usage Instructions and Notes

1. **Log File Path**: Ensure that the `LOG_FILE` variable points to the correct location of your Nginx access log file.
2. **Time Range**: The script is configured to analyze logs between `2025-04-01 00:00:00` and `2025-04-02 23:59:59`. Adjust these values as needed.
3. **Keywords**: The script searches for the keywords "500" and "timeout". You can modify the `KEYWORDS` array to include other terms as necessary.
4. **Execution**: Run the script in a shell environment with appropriate permissions to read the log file (see the portability sketch after these notes).
5. **Output**: The results are displayed in a formatted table showing the IP address, URL, and status code for matching entries.
6. **Safety**: This script is read-only and will not modify any log data.
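A portability note that applies to this and the following scripts: `date -d` and awk's `mktime()`/`gensub()` are GNU extensions, so the scripts as written assume GNU coreutils and gawk. A minimal preflight sketch under that assumption (the checks and messages are illustrative additions, not part of the original script):

```bash
#!/bin/bash
# Minimal preflight sketch (assumption: the scripts target GNU tools).
# `date -d` is GNU-specific (BSD/macOS date uses -j -f instead), and
# mktime()/gensub() are gawk extensions that mawk and BSD awk lack.
command -v gawk >/dev/null 2>&1 \
    || { echo "gawk is required for mktime(); run the awk filters as gawk" >&2; exit 1; }
date -d "2025-04-01 00:00:00" +%s >/dev/null 2>&1 \
    || { echo "GNU date (-d) is required; on macOS install coreutils and use gdate" >&2; exit 1; }
echo "Environment OK"
```

On systems where `/usr/bin/awk` is mawk (common on Debian and Ubuntu), invoke the filters with `gawk` explicitly.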

DeepSeek V3 (DeepSeek)

# Nginx Access Log Troubleshooting Plan

## Troubleshooting Strategy Overview

This plan scans the Nginx access log for 500 status codes and the keyword "timeout", filters records within the 2025-04-01 to 2025-04-02 time range, and outputs the IP, URL, and status code fields as a table.

## Log Troubleshooting Script Source Code

```bash
#!/bin/bash

# Path to the log file (adjust to your environment)
LOG_FILE="/var/log/nginx/access.log"

# Time range, written in the Nginx log time format
START_TIME="01/Apr/2025:00:00:00"
END_TIME="02/Apr/2025:23:59:59"

# Abort if the log file is missing
if [ ! -f "$LOG_FILE" ]; then
    echo "Error: log file $LOG_FILE does not exist" >&2
    exit 1
fi

# Table header
echo "IP | URL | Status | Count"
echo "---|-----|--------|------"

# Main processing
awk -v start="$START_TIME" -v end="$END_TIME" '
# Turn "01/Apr/2025:00:00:00" into a chronologically sortable key "20250401000000"
function key(ts,    t) {
    split(ts, t, /[\/:]/)
    return t[3] sprintf("%02d", month[t[2]]) t[1] t[4] t[5] t[6]
}
BEGIN {
    split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", m, " ")
    for (i = 1; i <= 12; i++) month[m[i]] = i
    s = key(start); e = key(end)
}
{
    # The timestamp lives in $4 as [01/Apr/2025:00:00:00 +0800] in the default format
    k = key(substr($4, 2))

    # Keep in-range entries that are 500s or mention a timeout,
    # printing IP ($1), URL ($7), and status ($9)
    if (k >= s && k <= e && ($9 == 500 || $0 ~ /timeout/)) {
        print $1, $7, $9
    }
}' "$LOG_FILE" | sort | uniq -c | sort -nr \
    | awk '{printf "%s | %s | %s | %s\n", $2, $3, $4, $1}'
```

## Usage Instructions and Notes

### Usage

1. Save the script as `nginx_log_check.sh`.
2. Set the `LOG_FILE` variable to the actual path of your Nginx access log.
3. Make the script executable: `chmod +x nginx_log_check.sh`
4. Run it: `./nginx_log_check.sh`

### Notes

1. The script assumes the standard Nginx access log format:
   `$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"`
2. If your Nginx uses a custom log format, adjust the field positions in the awk script ($1, $4, $7, $9, etc.).
3. Time filtering is based on the local time recorded in the log; make sure the timezone is set correctly.
4. The output table is sorted by occurrence count in descending order and contains:
   - IP address
   - Request URL
   - HTTP status code
   - Occurrence count
5. To process compressed logs (e.g. .gz files), prepend `zcat` or `zgrep` to the pipeline (see the sketch below).
6. The script is read-only and does not modify the original log files.
7. Processing large log files can take a while; consider running the script during off-peak hours.
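Note 5 above can be made concrete with `zcat -f` (i.e. `gzip -cdf`), which decompresses `.gz` members and passes plain files through unchanged. A simplified sketch, assuming logrotate-style file names and with the time filter omitted for brevity:

```bash
#!/bin/bash
# Sketch for note 5: the same keyword filter over the current log plus rotated
# .gz archives. `zcat -f` (gzip -cdf) decompresses .gz files and passes plain
# files through unchanged. The glob assumes logrotate-style names, and the
# time filter is omitted for brevity; splice in the awk body above for parity.
zcat -f /var/log/nginx/access.log /var/log/nginx/access.log.*.gz 2>/dev/null \
    | awk '$9 == 500 || /timeout/ { print $1, $7, $9 }' \
    | sort | uniq -c | sort -nr | head -n 20
```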

gpt-4o (OpenAI)

## Troubleshooting Strategy Overview

We examine the Nginx access log for entries containing the keywords "500" and "timeout". These keywords typically indicate server-side errors or request timeouts, and help localize likely failure causes quickly. The entries are additionally filtered to the specified time range (2025-04-01 to 2025-04-02), and the IP, URL, and status code fields are extracted and printed as a table.

## Log Troubleshooting Script Source Code

```bash
#!/bin/bash

# Path to the Nginx access log
LOG_FILE="/path/to/nginx/access.log"

# Define the time range
START_TIME="2025-04-01 00:00:00"
END_TIME="2025-04-02 23:59:59"

# Convert the boundaries to epoch seconds
START_TIMESTAMP=$(date -d "$START_TIME" +"%s")
END_TIMESTAMP=$(date -d "$END_TIME" +"%s")

# Parse the log file
awk -v start="$START_TIMESTAMP" -v end="$END_TIMESTAMP" '
BEGIN {
    # Month map for converting the Nginx timestamp to epoch seconds
    split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", m, " ")
    for (i = 1; i <= 12; i++) month[m[i]] = i
}
{
    # Extract the bracketed timestamp, e.g. [01/Apr/2025:00:00:00 +0800]
    if (match($0, /\[([0-9]{2})\/([A-Za-z]{3})\/([0-9]{4}):([0-9]{2}):([0-9]{2}):([0-9]{2})/, t)) {
        log_timestamp = mktime(t[3] " " month[t[2]] " " t[1] " " t[4] " " t[5] " " t[6])

        # Filter by time range and keywords
        if (log_timestamp >= start && log_timestamp <= end && ($0 ~ /500/ || $0 ~ /timeout/)) {
            # Extract the needed fields: IP, URL, status
            ip = $1
            url = $7
            status = $9

            # Print the result as an aligned table row
            printf "%-15s %-50s %-5s\n", ip, url, status
        }
    }
}' "$LOG_FILE"
```

## Usage Instructions and Notes

1. **Log file path**: Set the `LOG_FILE` variable inside the script to the absolute path of the Nginx access log.
2. **Time range**: The script uses `awk` to extract and compare timestamps; make sure the range is set correctly. Start and end times must follow the `YYYY-MM-DD HH:mm:ss` format.
3. **Running the script**: Make sure the script file is executable. Add execute permission with `chmod +x script.sh` and run it with `./script.sh`.
4. **Notes**:
   - The input dates must be well formed, otherwise the time conversion may fail.
   - The script only works with the Nginx access log format; other formats need adaptation.
   - Matching the bare keyword "500" can also hit other numeric fields (e.g. response sizes); tighten it to a status-field check (`$9 == 500`) if the output is noisy.
   - Make sure the log file is readable, to avoid failures caused by permissions.
5. **Safety**: The script only reads the log file; it does not modify or damage the data.

With this script you can quickly locate log records related to "500" and "timeout" within a given time range, presented as a table of IP, URL, and status code for further analysis. If JSON output is needed instead, the output section of the `awk` program can be adjusted, as sketched below.
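As a sketch of the JSON adjustment mentioned above, the `printf` in the awk body can emit one JSON object per line instead; this assumes the default field positions and that IPs and URLs contain no double quotes (a proper JSON escaper is left out for brevity):

```bash
#!/bin/bash
# JSON-lines sketch of the alternative output. Field positions assume the
# default Nginx combined log format, and the values are assumed to contain
# no double quotes; a full JSON escaper is omitted for brevity.
awk '$9 == 500 || /timeout/ {
    printf "{\"ip\":\"%s\",\"url\":\"%s\",\"status\":%s}\n", $1, $7, $9
}' /path/to/nginx/access.log
```

Each output line is a standalone JSON object, so the stream can be piped directly into `jq` for further filtering or aggregation.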

通义千问2.5-Coder-32B (Alibaba Cloud Bailian)

## Troubleshooting Strategy Overview

The provided script analyzes Nginx access logs for specific keywords ("500" and "timeout") within a given time range ("2025-04-01 00:00:00" to "2025-04-02 23:59:59"). It extracts the IP address, URL, and status code from matching entries and formats the output as a table.

### Log Troubleshooting Script Source Code

```bash
#!/bin/bash

# Configuration
LOG_FILE="/var/log/nginx/access.log"
KEYWORDS=("500" "timeout")
START_TIME="2025-04-01 00:00:00"
END_TIME="2025-04-02 23:59:59"

# Convert human-readable dates to epoch for comparison
START_EPOCH=$(date -d "$START_TIME" +%s)
END_EPOCH=$(date -d "$END_TIME" +%s)

# Function to extract fields from a log line
extract_fields() {
    local line="$1"
    local ip url status
    ip=$(echo "$line" | awk '{print $1}')
    url=$(echo "$line" | awk '{print $7}')
    status=$(echo "$line" | awk '{print $9}')
    echo "$ip $url $status"
}

# Main processing: keep lines inside the time range that match any keyword
awk -v start="$START_EPOCH" -v end="$END_EPOCH" -v keywords="${KEYWORDS[*]}" '
BEGIN {
    n = split(keywords, kw, " ")
    # Month map for converting the default Nginx timestamp to epoch seconds
    split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", m, " ")
    for (i = 1; i <= 12; i++) month[m[i]] = i
}
{
    # $4 holds [01/Apr/2025:00:00:00 in the default log format
    split(substr($4, 2), t, /[\/:]/)
    epoch = mktime(t[3] " " month[t[2]] " " t[1] " " t[4] " " t[5] " " t[6])

    if (epoch >= start && epoch <= end) {
        for (i = 1; i <= n; i++) {
            if ($0 ~ kw[i]) { print; next }
        }
    }
}' "$LOG_FILE" | while read -r log_line; do
    extract_fields "$log_line"
done | awk 'BEGIN {print "IP\tURL\tStatus"} {print $1"\t"$2"\t"$3}' | column -t
```

### Usage Instructions and Notes

1. **Save the Script**: Save the above script to a file, e.g., `nginx_log_analysis.sh`.
2. **Make Executable**: Run `chmod +x nginx_log_analysis.sh` to make the script executable.
3. **Run the Script**: Execute the script using `./nginx_log_analysis.sh`.

#### Notes:

- **Log File Path**: Ensure the `LOG_FILE` variable points to the correct Nginx access log file path.
- **Keywords**: The script searches for lines containing any of the specified keywords ("500" and "timeout").
- **Time Range**: The script filters log entries between the specified start and end times.
- **Field Extraction**: The script extracts the IP address, URL, and status code from each matching log entry.
- **Output Format**: The output is formatted as a tab-separated table with columns for IP, URL, and Status.
- **Dependencies**: The script uses standard Unix utilities (`awk`, `date`, `column`) and should work on most Linux distributions; note that `mktime()` is a GNU awk extension, so gawk is required.

This script is designed to be read-only and does not modify any log files or system settings. A quick way to verify the assumed field positions is sketched below.
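All four scripts hard-code the field positions `$1`/`$7`/`$9`, so before trusting their output it is worth checking those positions against your actual `log_format`. A small sketch for that check:

```bash
#!/bin/bash
# Print the first log line split into numbered awk fields so you can confirm
# that $1 is the client IP, $7 the request URL, and $9 the status code under
# your log_format. Adjust the path to your access log first.
head -n 1 /var/log/nginx/access.log \
    | awk '{ for (i = 1; i <= NF; i++) printf "$%-2d = %s\n", i, $i }'
```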

| Model | Provider | Total response time (s) | Tokens generated | First-token latency (s) | Generation rate (tokens/s) |
|---|---|---|---|---|---|
| claude3.5 sonnet 20241022 | Anthropic | 14.34 (fastest response) | 0 | 2.66 | 0 |
| DeepSeek V3 | DeepSeek | 36.06 | 701 | 3.02 | 19.44 |
| gpt-4o | OpenAI | 16.91 | 736 | 2.56 | 43.53 (fastest generation) |
| 通义千问2.5-Coder-32B | Alibaba Cloud Bailian | 46.55 | 743 (most content) | 0.45 (lowest latency) | 15.96 |