Sports Science Daily
Skill Description
```yaml
name: generate-daily-sports-update
description: Automated sports science intelligence engine — fetches 55+ sources (PubMed, expert blogs, wearable tech), filters noise, translates to Chinese, and syncs to Feishu/Notion.
metadata:
  openclaw.homepage: https://github.com/w2478328197-arch/sports-science-daily
user-invocable: true
requires.bins:
  - python3
requires.env:
  - FEISHU_APP_ID
  - FEISHU_APP_SECRET
  - FEISHU_RECEIVE_ID
```
Sports Science Daily — AI Agent Skill
An automated intelligence engine that aggregates 55+ global sports science sources into a single daily report, with smart filtering, auto-translation, and multi-platform sync.
What It Does
- Fetches peer-reviewed papers from 23 PubMed journals (BJSM, Sports Medicine, JSCR, MSSE, etc.)
- Crawls RSS feeds from 14 expert blogs/podcasts (Huberman, Attia, Nuckols, Dr. Mike, NSCA, etc.)
- Monitors 18 industry sources (The Quantified Scientist, DC Rainmaker, Oura, Garmin, ScienceDaily, ACSM, etc.)
- Filters noise using a 4-layer keyword system (positive/research/strong/negative keywords + trusted source whitelist)
- Translates all content to Chinese (or any target language) via Google Translate API
- Sorts each section by date (newest first)
- Deduplicates against local history to prevent repeat content
- Syncs the final report as a Feishu Cloud Document with notification card, and optionally to Notion
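The 4-layer keyword filter described above can be sketched roughly as follows. This is a minimal illustration only: the real keyword lists and logic live in `src/config.py` and `src/formatter.py`, and every name and keyword below is an assumption.

```python
# Illustrative sketch of a 4-layer keyword filter with a trusted-source
# whitelist. Keyword sets are placeholders, not the project's real lists.
POSITIVE = {"training", "recovery", "vo2max", "hypertrophy"}
RESEARCH = {"randomized", "meta-analysis", "cohort", "trial"}
STRONG = {"systematic review"}                  # strong signal: always keep
NEGATIVE = {"giveaway", "discount", "sponsored"}
TRUSTED_SOURCES = {"BJSM", "Sports Medicine"}   # whitelist bypasses filtering

def keep_item(title: str, source: str) -> bool:
    """Return True if an item passes the noise filter."""
    text = title.lower()
    if any(k in text for k in NEGATIVE):        # negative keywords: hard reject
        return False
    if source in TRUSTED_SOURCES:               # whitelisted source: keep
        return True
    if any(k in text for k in STRONG):          # strong keyword: keep
        return True
    # otherwise require both a topical hit and a research-quality signal
    return any(k in text for k in POSITIVE) and any(k in text for k in RESEARCH)
```

The whitelist check runs after the negative-keyword check so that promotional posts are rejected even from trusted sources.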
Prerequisites
- Python 3.8+ with `feedparser` and `requests` installed (`pip3 install -r requirements.txt`)
- Feishu App Credentials (for cloud document sync):
  - `FEISHU_APP_ID`: Feishu app ID
  - `FEISHU_APP_SECRET`: Feishu app secret
  - `FEISHU_RECEIVE_ID`: Target user/chat ID for the message card
- (Optional) Notion Integration for Notion page sync: `NOTION_TOKEN` and `NOTION_PAGE_ID`
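Before running, it can help to fail fast if any credentials are missing. A minimal sketch (the variable names come from this README; the helper function itself is hypothetical):

```python
import os

# Required for Feishu cloud document sync (names from this README).
REQUIRED = ["FEISHU_APP_ID", "FEISHU_APP_SECRET", "FEISHU_RECEIVE_ID"]
# Only needed when Notion sync is enabled.
OPTIONAL = ["NOTION_TOKEN", "NOTION_PAGE_ID"]

def missing_env(required=REQUIRED) -> list:
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]
```

A caller could abort with `sys.exit(...)` when `missing_env()` returns a non-empty list.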
Instructions
1. Navigate to the project directory: ensure you are in the `sports-science-daily` project root.
2. Run the update:

   ```
   python3 main.py --days 2
   ```

3. Available options:

   | Flag | Default | Description |
   | --- | --- | --- |
   | `--days N` | 7 | Lookback period in days |
   | `--no-history` | off | Force re-fetch all items (ignore dedup) |
   | `--no-bloggers` | off | Skip blogger feeds; fetch only industry + PubMed |
   | `--lang LANG` | zh-CN | Output language (en, es, ja, etc.) |

4. Output:
   - Local Markdown file: `YYYY-MM-DD_运动科学日报.md`
   - Feishu Cloud Document (auto-created with a shareable link)
   - Feishu message card sent to the configured recipient
   - Updated `processed_history.json` for deduplication
5. "No New Content" scenario: if the output shows "🎉 没有发现新内容" (no new content found), increase `--days` or use `--no-history`.
Project Architecture
```
main.py              # CLI entry point
src/
├── config.py        # All sources, journals, blocklists
├── crawler.py       # RSS + PubMed API fetching
├── formatter.py     # Markdown generation + keyword filtering
├── translator.py    # Google Translate API
├── history.py       # Deduplication management
└── exporters/
    ├── feishu.py    # Feishu cloud doc sync + message card
    └── notion.py    # Notion page sync
```
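Based on the flag table in the Instructions section, the CLI surface of `main.py` might look like this sketch (the actual argument wiring in the project is an assumption):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Reconstruct the CLI from the README's flag table (illustrative)."""
    parser = argparse.ArgumentParser(description="Sports Science Daily")
    parser.add_argument("--days", type=int, default=7, metavar="N",
                        help="Lookback period in days")
    parser.add_argument("--no-history", action="store_true",
                        help="Force re-fetch all items (ignore dedup)")
    parser.add_argument("--no-bloggers", action="store_true",
                        help="Skip blogger feeds; fetch only industry + PubMed")
    parser.add_argument("--lang", default="zh-CN", metavar="LANG",
                        help="Output language (en, es, ja, etc.)")
    return parser
```

For example, `python3 main.py --days 2` would parse to `days=2` with all other options at their defaults.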
Security & Privacy
- External APIs: PubMed (eutils.ncbi.nlm.nih.gov), Google Translate, Feishu OpenAPI, Notion API, various RSS feeds
- Local files: Reads/writes `processed_history.json` and `.md` reports
- No PII exposure: Only fetches public research data and news feeds
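As an illustration of the PubMed access listed above, a search URL for the public E-utilities `esearch.fcgi` endpoint could be built like this (the journal term and date parameters are one assumed usage, not necessarily what `crawler.py` does):

```python
from urllib.parse import urlencode

# Public NCBI E-utilities search endpoint (no API key required for
# low-volume use; the parameter choices below are illustrative).
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_url(journal: str, days: int) -> str:
    """Build an ESearch URL for recent articles in one journal."""
    params = {
        "db": "pubmed",
        "term": f'"{journal}"[Journal]',
        "reldate": days,       # restrict to the last N days
        "datetype": "pdat",    # filter by publication date
        "retmode": "json",
    }
    return f"{EUTILS}?{urlencode(params)}"
```

The returned JSON lists PubMed IDs, which a crawler would then expand via a follow-up ESummary or EFetch call.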
How to Use "Sports Science Daily"?
- Open 小龙虾AI (Web or iOS app)
- Click the "Use Now" button above, or type a task description in the chat box
- 小龙虾AI automatically matches and invokes the "Sports Science Daily" skill to complete the task
- Results are shown instantly, and you can continue the conversation to refine them