I am always looking for ways to automate the repetitive tasks and busywork of my day-to-day work. In this post, I describe my attempt at building a note-taking system that serves as a knowledge base for my work and simplifies providing status updates for the many standup meetings I (have to) attend.
I'll describe how I changed the way I capture my daily work logs: moving away from manually written notes to capturing them in a structured form that modern AI tools can access. This accessibility to automated LLM processing is the key enabling feature of the new system. It unlocks a new level of productivity because I provide context and raw data rather than concern myself with writing notes from scratch.
One could argue that completely revamping one's note-taking approach is overkill just to produce status updates. However, it has changed the way I track my daily work, how I access my work log archive, and how I extract value from these notes. In short, the new system has provided a lot of value beyond automating status updates.
To provide some context about the origins of this project and this post:
- Previously I was taking notes in the macOS Notes app. While very convenient for capturing notes with minimal friction across various devices, this setup is not easily accessible to external tools since exporting notes in batch is not possible natively.
- A friend of mine recently introduced me to Obsidian, a note-taking app that uses plain Markdown files, has a powerful plugin system, and is very flexible in how notes can be used, exported, and accessed. I immediately took to Obsidian (thanks, Akash, for introducing me to this amazing tool), so I wanted a setup that uses plain Markdown files as the underlying format.
- I wanted to minimize friction in capturing my daily work: not just work logs, but also meeting transcripts, Slack conversation summaries, tutorials, ideas, etc. As a sidenote, I am a big fan of note-taking and take pride in writing high-quality and thorough notes - not just for the sake of taking notes, but as a way to archive my daily tasks and be able to refer back to them later.
- Most of all, I wanted to build a system that could generate status updates for the countless standup meetings in a fully automated fashion. I wanted an assistant that could summarize my notes into bulleted lists that I could just copy-paste into the standup meeting (the next step for my automation journey is to have a voice-cloned model read them out for me).
The project description below is the (mostly) LLM-generated README for the note-taking system. In a sense, this summary is the “product” of this LLM-powered note-taking system (with some light touching up by me).
One section I want to highlight in this post is LLM-Powered Workflows, which details how large language models transform raw work inputs (Slack threads, meeting transcripts) into structured, actionable notes with minimal manual effort. If you read only one section, make it that one; it is the part that can provide the most value in your daily work.
## The Problem: Engineering Work Is Chaotic
As an engineer, I deal with constant context switching throughout the workday:
- Morning standup → deep work on feature implementation → code review → meeting about infrastructure → Slack fire drill → back to debugging
By Friday, I can barely remember what I worked on Monday. Status reports become archaeological expeditions through Slack, email, and half-finished docs. Meeting notes get lost. Project context evaporates.
I needed a system that could:
- Capture everything in one place — daily logs, meeting notes, project status, reference links
- Stay out of my way — minimal friction during the day
- Generate outputs — weekly (or daily) status updates, project summaries, task lists, etc.
- Be LLM-accessible — let Cursor/Claude read my notes and help me write documentation
The result is an Obsidian vault organized using the PARA (Projects, Areas, Resources, Archive) methodology, integrated with Cursor for LLM-powered processing.
## The Folder Structure
```
work-notes/
├── 10-Daily/                 # Daily work logs
│   ├── 2025/
│   │   └── 12/
│   │       ├── 2025-12-11.md
│   │       └── ...
│   └── 2026/
│       └── 01/
│           ├── 2026-01-05.md
│           ├── 2026-01-06.md
│           └── ...
├── 20-Projects/              # Active project overviews
│   ├── ML Pipeline Refactor.md
│   ├── API Gateway Migration.md
│   └── Projects Index.md
├── 30-Areas/                 # Ongoing responsibilities
│   └── tasks.md              # Auto-aggregated task list
├── 40-Resources/             # Reference material
│   ├── data/                 # CSV files, datasets
│   ├── design_proposals/     # Feature specs
│   ├── docs/                 # Reference documentation
│   ├── images/               # Screenshots, diagrams
│   ├── implementation_plans/ # Detailed plans
│   ├── meeting-transcripts/  # Raw transcripts
│   ├── scripts/              # Python automation
│   ├── tools/                # CLI cheat sheets
│   └── workflows/            # Process documentation
├── 80-Templates/             # Note templates
│   ├── daily.md
│   ├── meeting.md
│   └── project-overview.md
├── Personal/                 # Non-work notes
└── Obsidian Setup.md         # This guide
```
This follows the PARA methodology by Tiago Forte:
| Folder | Purpose | Time-bound? |
|---|---|---|
| Projects | Active efforts with a deadline | ✅ Yes |
| Areas | Ongoing responsibilities | ❌ No |
| Resources | Reference material for future use | ❌ No |
| Archive | Completed/inactive items | ❌ No |
Key distinction:
- Project: “Migrate API Gateway to new infrastructure” → has an end date, can be completed
- Area: “Maintain CI/CD pipelines” → ongoing, never truly “done”
## The Daily Note: My Primary Interface
Every workday starts with a new daily note. This is where everything gets captured.
### Template (`80-Templates/daily.md`)
```
---
date: {{date}}
type: daily
area:
tags:
  - daily
  - {{date:YYYY}}/{{date:MM}}
projects_touched: []
---

# {{date:dddd, MMMM D, YYYY}}

## 🎯 Focus Areas
-

## 📋 Tasks
### High Priority [p:: 1]
- [ ] Task [p:: 1]

### Medium Priority [p:: 2]
- [ ] Task [p:: 2]

### Upcoming
- [ ] Task

---

## 🔄 Work Log

---

## 🚧 Blockers
-

---

## 💡 Notes & Ideas
-

---

## 📎 References
### Slack Threads
-

### Google Docs
-

### Code & PRs
-

### Related Notes
-

---

## Related
- Previous: [[{{date:YYYY-MM-DD|-1d}}]]
```

### What a Real Daily Note Looks Like
Here’s an example of a typical work day:
```
---
date: 2026-01-06
type: daily
area: data-platform
tags:
  - daily
  - 2026/01
projects_touched:
  - "[[ML Pipeline Refactor]]"
  - "[[API Gateway Migration]]"
---

# Monday, January 6, 2026

## 🎯 Focus Areas
**Summary**: Created comprehensive data transformation workflow documentation
and automation script.

### Open Follow-ups
- [ ] Test the transformation script with production data sample [p:: 1]
- [ ] Validate output format against downstream consumers [p:: 2]
- [ ] Follow up on infra team's capacity planning meeting [p:: 1]

## 📋 Tasks Completed
- [x] Researched Spark → Flink migration patterns
- [x] Documented the new streaming pipeline architecture
- [x] Created Python automation script for schema validation

## 🔄 Work Log
### Data Pipeline Workflow Documentation
Created a comprehensive workflow document capturing the migration process:

**Key Components:**
1. **Input Format**: Parquet files from data lake
2. **Transformation Layer**: Flink streaming jobs
3. **Output Format**: Delta tables for ML training
4. **Monitoring**: Prometheus metrics + Grafana dashboards

### Artifacts Created
| Artifact | Location | Description |
|----------|----------|-------------|
| Workflow Doc | `40-Resources/workflows/spark-to-flink-migration.md` | Process documentation |
| Python Script | `40-Resources/scripts/schema_validator.py` | Automation script |

## 📎 References
### Slack Threads
- [Pipeline Architecture Discussion](https://slack.com/...) — sync thread

### Google Docs
- [Migration Playbook](https://docs.google.com/...) — Original playbook
- [Data Platform Strategy](https://docs.google.com/...) — Workflows

### Related Notes
- [[spark-to-flink-migration]] — Comprehensive workflow documentation
```

The daily note becomes a chronological log of everything I touched, with links to all relevant resources. While these daily-note Markdown files can be viewed directly in Cursor, I also use Obsidian as my UI, since it provides a few powerful features for managing notes (such as the plugin system).
## Project Notes: Living Documentation
Each project gets a dedicated note that evolves over time.
### Template (`80-Templates/project-overview.md`)
```
---
type: project
status: active      # active | on-hold | completed | cancelled
priority: high      # high | medium | low
owner:
stakeholders: []
start_date: {{date}}
target_date:
tags:
  - project
  - status/active
jira_epic:
confluence_page:
---

# {{title}}

## Summary
<!-- 2-3 sentence project summary for status reports -->

## Objectives
1.

## Current Status
<!-- Updated weekly - this section is scraped for status reports -->
**Last Updated:** {{date}}
**Health:** 🟢 On Track | 🟡 At Risk | 🔴 Blocked

### This Week
-

### Next Week
-

### Risks & Blockers
-

## Team
-

## Links
- [[Projects Index|All Projects]]
```

### Real Example: ML Pipeline Refactor
```
# ML Pipeline Refactor

## Summary
Modernizing the ML training pipeline from batch Spark jobs to real-time
Flink streaming, reducing data freshness from 24 hours to under 5 minutes
while improving cost efficiency.

## Key Components
- **Data Ingestion** — Kafka → Flink streaming
- **Feature Store** — Real-time feature computation
- **Model Training** — Incremental learning support
- **Monitoring** — End-to-end latency tracking

## Current Status
**Health:** 🟡 At Risk

### Risks & Blockers
- Feature store migration blocked on schema freeze
- Need capacity planning approval for new Flink cluster

## Key Workstreams
### Streaming Architecture
- Replace batch Spark jobs with Flink stateful processing
- Target: <5 min data freshness (currently 24h)

### Feature Store Migration
- Move from offline feature store to hybrid online/offline
- Support both batch training and real-time inference

## Team
- Alice Chen — Tech Lead
- Bob Martinez — Data Engineer
- Carol Singh — ML Engineer

## Links
- [[Projects Index|All Projects]]
- [[API Gateway Migration]] — Dependent project
- [[Pipeline Architecture Docs]] — Technical reference
- [[2025-12-11]] — Initial project setup
```

The project note serves as a living document: I update it weekly with the current status, and it links to all related daily notes, docs, and resources.
## Auto-Aggregated Task Lists
One of the most powerful features is the `30-Areas/tasks.md` file, which uses Dataview queries to aggregate all incomplete tasks across the vault:
## Incomplete Tasks from Daily Notes
```dataview
TABLE WITHOUT ID
text AS "Task",
choice(p, p, "Unknown") AS "Priority",
file.link AS "Source"
FROM "10-Daily"
FLATTEN file.tasks AS item
WHERE !item.completed
FLATTEN item.text AS text
FLATTEN item.p AS p
SORT choice(p, p, 99) ASC, file.name DESC
LIMIT 30
```

This automatically finds every `- [ ]` checkbox from my daily notes and displays them sorted by priority.
### Priority System
Tasks use inline fields for priority:
| Priority | Meaning | Example |
|---|---|---|
| `[p:: 1]` | 🔴 Critical | `- [ ] Fix prod bug [p:: 1]` |
| `[p:: 2]` | 🟠 High | `- [ ] Review PR [p:: 2]` |
| `[p:: 3]` | 🟡 Medium | `- [ ] Update docs [p:: 3]` |
| `[p:: 4]` | 🟢 Low | `- [ ] Refactor code [p:: 4]` |
This means I can scatter tasks throughout my daily notes, and they all roll up into a single prioritized view.
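The same roll-up can be reproduced outside Obsidian, which is handy when a script or an LLM pipeline needs the task list. Here is a minimal Python sketch; the vault path is an assumption, and the regexes only cover the simple `- [ ] ... [p:: N]` form used above:

```python
import re
from pathlib import Path

# Assumed vault location; adjust to your setup.
VAULT = Path.home() / "workspace" / "obsidian" / "work-notes"

TASK_RE = re.compile(r"^\s*- \[ \] (?P<text>.+)$")   # unchecked checkboxes
PRIO_RE = re.compile(r"\[p::\s*(?P<p>\d+)\]")        # inline priority field

def open_tasks():
    """Collect unchecked tasks from all daily notes, like the Dataview query."""
    tasks = []
    for note in (VAULT / "10-Daily").rglob("*.md"):
        for line in note.read_text(encoding="utf-8").splitlines():
            if m := TASK_RE.match(line):
                prio = PRIO_RE.search(m["text"])
                tasks.append((int(prio["p"]) if prio else 99, m["text"], note.stem))
    tasks.sort(key=lambda t: t[2], reverse=True)  # file name DESC (newest first)
    tasks.sort(key=lambda t: t[0])                # then priority ASC (stable sort)
    return tasks

if __name__ == "__main__":
    for p, text, source in open_tasks()[:30]:
        print(f"[p{p}] {text}  ({source})")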
## The LLM Integration: Where It Gets Powerful
### Opening the Vault in Cursor
The entire vault is a Cursor workspace. This means Claude can read any note, search across files, and understand context from my work history. Coincidentally, this setup is very similar to what Anthropic made available today via Cowork.
```bash
cursor ~/workspace/obsidian/work-notes
```
This simple setup unlocks powerful workflows where I rarely write notes from scratch - instead, I feed raw inputs to Claude and get structured, linked outputs.
## LLM-Powered Workflows
The real power of this system isn’t manual note-taking—it’s using LLMs to process raw information into structured notes. Here are my core workflows:
### Workflow 1: Slack Thread → Daily Note
Instead of manually summarizing Slack discussions, I copy the thread and let Claude do the work.
**Input:** Raw Slack thread (copy-pasted or via an MCP Slack integration)

**Prompt:**

```
Here's a Slack thread about the data pipeline migration.
Summarize this into my daily note format:
1. Extract the key decisions made
2. List any action items with owners
3. Note any blockers or risks mentioned
4. Add relevant links
Format it for the Work Log section of today's daily note.
```
**Output:** Structured summary ready to paste:

```
### Slack: Data Pipeline Migration Discussion
**Thread:** [#data-platform](https://slack.com/...) — 2026-01-06

**Key Decisions:**
- Agreed to use Flink over Spark Streaming for stateful processing
- Schema freeze pushed to Friday to accommodate late feedback
- Will run both pipelines in parallel for 2 weeks before cutover

**Action Items:**
- [ ] @alice — Update capacity estimates by Wednesday [p:: 2]
- [ ] @bob — Set up parallel pipeline environment [p:: 1]
- [ ] @carol — Draft rollback plan for review [p:: 2]

**Blockers:**
- Waiting on infra team approval for new Kafka cluster

**Links:**
- [Original thread](https://slack.com/...)
- [[ML Pipeline Refactor]] — Parent project
```

**Time saved:** 15 minutes → 2 minutes
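Nothing about this workflow requires the chat UI, either. As a sketch of running the same prompt headlessly with the `anthropic` Python SDK (the model id and vault path are assumptions, and the prompt is simply the one above):

```python
from datetime import date
from pathlib import Path

import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

VAULT = Path.home() / "workspace" / "obsidian" / "work-notes"

PROMPT = """Here's a Slack thread about the data pipeline migration.
Summarize this into my daily note format:
1. Extract the key decisions made
2. List any action items with owners
3. Note any blockers or risks mentioned
4. Add relevant links
Format it for the Work Log section of today's daily note.

{thread}"""

def summarize_thread(thread_text: str) -> str:
    """Run the Workflow 1 prompt over a pasted Slack thread."""
    client = anthropic.Anthropic()
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model id; use whatever you have access to
        max_tokens=1024,
        messages=[{"role": "user", "content": PROMPT.format(thread=thread_text)}],
    )
    return msg.content[0].text

def append_to_daily(summary: str) -> None:
    """Append the summary to today's daily note (10-Daily/YYYY/MM/YYYY-MM-DD.md)."""
    today = date.today()
    note = VAULT / "10-Daily" / f"{today:%Y}" / f"{today:%m}" / f"{today:%Y-%m-%d}.md"
    with note.open("a", encoding="utf-8") as f:
        f.write("\n" + summary + "\n")
```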
### Workflow 2: Meeting Transcript → Structured Notes
After a meeting, I often have a raw transcript (from Teams, Otter.ai, or manual notes). Claude transforms this into actionable documentation.
**Input:** Raw meeting transcript (stored in `40-Resources/meeting-transcripts/`)

**Prompt:**

```
Here's the transcript from today's architecture review meeting.
Create a meeting note following my template (80-Templates/meeting.md).
Include:
1. Executive summary (2-3 sentences)
2. Key decisions with rationale
3. Action items with owners and due dates (as checkboxes)
4. Open questions that need follow-up
5. Links to any documents or resources mentioned
Also extract any tasks that should go in today's daily note.
```
**Output:** Two artifacts.

**1. Meeting Note** (`40-Resources/meetings/2026-01-06-architecture-review.md`):

```
---
type: meeting
date: 2026-01-06
project: "[[ML Pipeline Refactor]]"
attendees: [Alice Chen, Bob Martinez, Carol Singh, David Kim]
tags:
  - meeting
  - architecture
---

# Architecture Review: Streaming Pipeline

## Executive Summary
Reviewed the proposed Flink architecture for the ML pipeline migration.
Approved the stateful processing approach with a 2-week parallel run period.
Identified capacity planning as the main risk to timeline.

## Attendees
- Alice Chen (Tech Lead)
- Bob Martinez (Data Engineer)
- Carol Singh (ML Engineer)
- David Kim (Infra)

## Key Decisions
| Decision | Rationale | Owner |
|----------|-----------|-------|
| Use Flink over Spark Streaming | Better exactly-once semantics, native Kafka integration | Alice |
| 2-week parallel run | Validate data quality before cutover | Bob |
| Schema freeze by Friday | Accommodate late feedback from ML team | Carol |

## Action Items
- [ ] @alice — Finalize Flink job configurations — Due: 2026-01-08
- [ ] @bob — Set up parallel pipeline environment — Due: 2026-01-10
- [ ] @david — Submit capacity request for 3 additional nodes — Due: 2026-01-07
- [ ] @carol — Update downstream consumers about schema changes — Due: 2026-01-09

## Open Questions
1. What's the fallback if capacity request is denied?
2. How do we handle late-arriving data during parallel run?
3. Should we notify customers about potential latency changes?

## Follow-ups
- [[2026-01-08]] — Capacity request decision expected
- [[ML Pipeline Refactor]] — Update project status

## References
- [Architecture Diagram](https://docs.google.com/...)
- [Capacity Planning Sheet](https://docs.google.com/...)
```

**2. Tasks for Daily Note:**

```
## 📋 Tasks from Architecture Review
- [ ] Review Flink job configurations before Friday [p:: 2]
- [ ] Follow up on capacity request with David [p:: 1]
```

**Time saved:** 30 minutes → 5 minutes
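One small supporting piece worth showing is the intake step that files a raw transcript under `40-Resources/meeting-transcripts/` before the LLM pass. This is a hedged sketch; the frontmatter fields and the helper itself are illustrative, not my exact script:

```python
from datetime import date
from pathlib import Path

VAULT = Path.home() / "workspace" / "obsidian" / "work-notes"

def store_transcript(raw_text: str, title: str) -> Path:
    """Stamp a raw transcript with frontmatter and file it in the vault."""
    slug = title.lower().replace(" ", "-")
    dest = (VAULT / "40-Resources" / "meeting-transcripts"
            / f"{date.today():%Y-%m-%d}-{slug}.md")
    frontmatter = (
        "---\n"
        "type: transcript\n"            # illustrative field names
        f"date: {date.today():%Y-%m-%d}\n"
        "tags:\n"
        "  - meeting\n"
        "  - transcript\n"
        "---\n\n"
    )
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_text(frontmatter + raw_text, encoding="utf-8")
    return dest

# Usage: store_transcript(open("otter-export.txt").read(), "Architecture Review")
```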
### Workflow 3: Weekly (or Daily) Status Generation
At the end of each week (or daily), Claude reads all my daily notes and generates a status report. The prompt below is usually just a starting point that I modify based on the expected format for a given standup or status meeting. For my daily standups, for example, I use a dedicated slash command in Cursor to generate the status report.
**Prompt:**

```
Based on this week's daily notes (2026-01-06 through 2026-01-10),
generate a weekly status update for the ML Pipeline Refactor project.
Format:
1. **Accomplishments** — What got done (bullet points)
2. **In Progress** — What's actively being worked on
3. **Blockers** — What's preventing progress
4. **Next Week** — Planned focus areas
5. **Metrics** — Any relevant numbers (optional)
Keep it concise—this goes to leadership.
```
**Output:**

```
## ML Pipeline Refactor — Week of Jan 6, 2026

### ✅ Accomplishments
- Completed Flink job design and configuration review
- Set up parallel pipeline environment for validation
- Finalized schema changes with ML team (freeze complete)
- Documented migration workflow and rollback procedures

### 🔄 In Progress
- Parallel run validation (started Jan 10, running for 2 weeks)
- Capacity planning approval (submitted, awaiting response)
- Downstream consumer notifications

### 🚧 Blockers
- **Capacity approval pending** — Need 3 additional nodes; decision expected Monday
- **Late schema feedback** — Pushed freeze by 2 days, absorbed into timeline

### 📅 Next Week
- Monitor parallel run data quality metrics
- Begin stakeholder communication about cutover timeline
- Draft customer notification if latency changes expected

### 📊 Metrics
- Data freshness: 24h → targeting <5min post-migration
- Parallel run started: 0/14 days complete
```

**Time saved:** 45 minutes → 5 minutes
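Collecting the week's notes for this prompt is mechanical enough to script. A sketch, under the same assumed vault path as above:

```python
from datetime import date, timedelta
from pathlib import Path

VAULT = Path.home() / "workspace" / "obsidian" / "work-notes"

def week_of_notes(start: date, days: int = 5) -> str:
    """Concatenate a date range of daily notes into one context blob for the LLM."""
    chunks = []
    for offset in range(days):
        d = start + timedelta(days=offset)
        note = VAULT / "10-Daily" / f"{d:%Y}" / f"{d:%m}" / f"{d:%Y-%m-%d}.md"
        if note.exists():
            chunks.append(f"<!-- {d:%Y-%m-%d} -->\n{note.read_text(encoding='utf-8')}")
    return "\n\n".join(chunks)

# Example: the week used in the prompt above (2026-01-06 through 2026-01-10)
context = week_of_notes(date(2026, 1, 6))
```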
### Workflow 4: Design Proposal Drafting
For new features or tools, I describe the problem and let Claude draft the initial proposal.
**Prompt:**

```
I want to build a voice cloning TTS system that:
- Runs on my Linux desktop with GPU
- Is accessible from my Mac via API
- Integrates with Cursor via MCP server
- Uses my cloned voice
Draft a design proposal covering:
1. Architecture diagram
2. Technology choices with pros/cons
3. Implementation phases
4. Hardware requirements
5. Open questions
Store it in 40-Resources/design_proposals/
```
**Output:** A comprehensive design document ready for iteration.

This is how I created proposals like these (see this GitHub repo):

- `voice-cloning-tts-mcp-server.md`
- `ai-meeting-assistant-teams.md`
- `cursor-config-sync.md`

**Time saved:** 2 hours → 20 minutes (for the first draft)
### Workflow Summary
| Workflow | Input | Output | Time Saved |
|---|---|---|---|
| Slack → Daily Note | Raw thread | Structured summary + action items | 15 min → 2 min |
| Transcript → Meeting Note | Raw transcript | Formatted note + tasks | 30 min → 5 min |
| Daily Notes → Status | Week’s notes | Leadership-ready update | 45 min → 5 min |
| Problem → Design Proposal | Requirements | Comprehensive proposal | 2 hr → 20 min |
The key insight: I rarely write notes from scratch. I capture raw inputs (transcripts, threads, ideas) and let LLMs structure them.
## Resources Folder: The Reference Library
The 40-Resources/ folder is where supporting material lives:
### Workflows
Detailed process documentation that Claude helped write:
````
# Spark to Flink Migration Workflow

## Pipeline Overview
```mermaid
graph TD
    A[Kafka Topics] -->|Consume| B[Flink Jobs]
    B -->|Transform| C[Feature Store]
    C -->|Serve| D[ML Models]
```

## Step 1: Prepare Input Schemas
The input topics should contain:

| Field | Type | Required |
|-------|------|----------|
| `event_id` | STRING | ✅ |
| `timestamp` | TIMESTAMP | ✅ |
| `payload` | JSON | ✅ |

[... comprehensive documentation continues ...]
````

### Scripts
Python automation that lives with my notes:
```
40-Resources/scripts/
├── schema_validator.py    # Validate data schemas
├── generate_test_data.py  # Create test fixtures
└── output/                # Generated artifacts
    ├── migration_report.csv
    └── validation_results.json
```
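To give a flavor of what lives there, here is roughly the shape of `schema_validator.py`. The required fields mirror the workflow doc above; the actual script is longer, and this version is only illustrative:

```python
import json
import sys

# Required fields from the migration workflow doc; Python types are illustrative.
REQUIRED_FIELDS = {"event_id": str, "timestamp": str, "payload": dict}

def validate_record(record: dict) -> list[str]:
    """Return a list of human-readable schema violations for one record."""
    errors = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in record:
            errors.append(f"missing required field: {field}")
        elif not isinstance(record[field], expected):
            errors.append(
                f"{field}: expected {expected.__name__}, "
                f"got {type(record[field]).__name__}"
            )
    return errors

if __name__ == "__main__":
    # Validate newline-delimited JSON from stdin and report per-record errors.
    failures = 0
    for i, line in enumerate(sys.stdin, 1):
        errs = validate_record(json.loads(line))
        if errs:
            failures += 1
            print(f"record {i}: " + "; ".join(errs))
    sys.exit(1 if failures else 0)
```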
### Data Files
CSV files, datasets, and reports:
```
40-Resources/data/
├── schema_inventory.csv
├── migration_status.csv
├── performance_benchmarks.csv
└── capacity_planning.md
```
### Images
Screenshots and diagrams, organized by date or project:
```
40-Resources/images/
├── daily/
│   └── 2025-12-15/
│       └── architecture_diagram.png
└── projects/
    └── ml-pipeline/
        └── data_flow_diagram.png
```
## The Linking Strategy
One of Obsidian's key features is the ability to link notes to each other using double brackets, e.g. `[[ML Pipeline Refactor]]`. This allows a lot of flexibility in how notes relate to one another. A nice benefit of linked notes is the knowledge graph view that Obsidian provides out of the box, which can be used to navigate the notes and their relationships.
### Daily → Project Links
Every daily note includes projects_touched in frontmatter:
```
projects_touched:
  - "[[ML Pipeline Refactor]]"
  - "[[API Gateway Migration]]"
```

### Bidirectional References
Project notes link back to relevant daily notes:
```
## Links
- [[2025-12-11]] — Initial project setup
- [[2026-01-06]] — Migration workflow documented
```

### Resource Links
Workflows and docs link to source notes:
```
## Related
- Project: [[ML Pipeline Refactor]]
- Daily: [[2026-01-06]] — When this was created
```

This creates a web of connections that makes context retrieval easy, both for me and for Claude.
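The link web is machine-readable, too. A small sketch that extracts the `[[wikilink]]` graph with a regex (alias links like `[[Page|Label]]` resolve to their target; the vault path is assumed as before):

```python
import re
from collections import defaultdict
from pathlib import Path

VAULT = Path.home() / "workspace" / "obsidian" / "work-notes"

# Captures the target of [[Target]] and [[Target|Alias]] links.
WIKILINK_RE = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]+)?\]\]")

def build_link_graph() -> dict[str, set[str]]:
    """Map each note (by stem) to the set of note titles it links to."""
    graph = defaultdict(set)
    for note in VAULT.rglob("*.md"):
        for target in WIKILINK_RE.findall(note.read_text(encoding="utf-8")):
            graph[note.stem].add(target.strip())
    return graph

# Example: every note that links to a given project
graph = build_link_graph()
backlinks = [src for src, targets in graph.items() if "ML Pipeline Refactor" in targets]
```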
## Essential Obsidian Plugins
| Plugin | Purpose |
|---|---|
| Templater | Template variables like `{{date}}`, date math |
| Dataview | SQL-like queries to aggregate tasks, filter notes |
| Calendar | Visual daily note navigation |
| Periodic Notes | Auto-create daily notes with templates |
| Tasks | Enhanced checkbox handling |
### Dataview Is the Secret Weapon
Dataview lets me query my notes like a database:
```dataview
// List all high-priority incomplete tasks
TABLE file.link AS "Source", text AS "Task"
FROM "10-Daily"
FLATTEN file.tasks AS item
WHERE !item.completed AND item.p = 1
```

```dataview
// Recent project updates
LIST
FROM "20-Projects"
WHERE status = "active"
SORT file.mtime DESC
```
## The Daily Workflow
### Morning (~5 min)
- Open today’s daily note (auto-created by Periodic Notes)
- Review yesterday’s carryover tasks
- Set focus areas for the day
### During the Day
- Log work under Work Log as I go
- Add tasks with `- [ ]` and priorities
- Paste links to Slack threads, docs, PRs
- Take meeting notes inline or in separate meeting notes
### End of Day (~5 min)
- Mark completed tasks with `[x]`
- Review blockers section
- Update project notes if significant progress
- Summarize today’s note into bulleted daily status updates for the standup meeting
### Weekly (~15 min)
- Use Claude to generate status updates from daily notes
- Update project “Current Status” sections
- Review the aggregated task list in `30-Areas/tasks.md`
## Future Work: AI Meeting Participation
The ultimate extension of this system is having an AI assistant that can participate directly in (standup) meetings on my behalf.
I’m designing an AI meeting assistant that:
- Joins Microsoft Teams meetings as a bot
- Continuously transcribes all spoken audio with speaker diarization
- Detects when my input is needed (name mentioned, direct questions, relevant topics)
- Generates context-aware responses using my Obsidian knowledge base via RAG
- Speaks using my cloned voice with sub-1-second latency
### Architecture Overview
```
┌─────────────────────────────────────────────────────────────────────────────────┐
│                           Microsoft Teams Meeting                                │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐              │
│  │ Participant │  │ Participant │  │   AI Bot    │  │ Participant │              │
│  │      A      │  │      B      │  │ (Me/Proxy)  │  │      C      │              │
│  └─────────────┘  └─────────────┘  └──────┬──────┘  └─────────────┘              │
└───────────────────────────────────────────┼─────────────────────────────────────┘
                                            │
                                            ▼
┌─────────────────────────────────────────────────────────────────────────────────┐
│                       Linux GPU Server (Inference + MCP)                         │
│                                                                                  │
│  ┌───────────────────────────────────────────────────────────────────────────┐  │
│  │                            Inference Pipeline                             │  │
│  │  ┌────────────────┐    ┌────────────────┐    ┌────────────────┐           │  │
│  │  │   ASR Engine   │    │   LLM Engine   │    │   TTS Engine   │           │  │
│  │  │  (Streaming)   │    │     + RAG      │    │ (Voice Clone)  │           │  │
│  │  │     ~80ms      │    │     ~400ms     │    │     ~200ms     │           │  │
│  │  └────────────────┘    └────────────────┘    └────────────────┘           │  │
│  └───────────────────────────────────────────────────────────────────────────┘  │
│                                                                                  │
│                    Total response latency target: <1 second                      │
└─────────────────────────────────────────────────────────────────────────────────┘
```
### Key Components
| Component | Purpose | Target Latency |
|---|---|---|
| Streaming ASR | Real-time transcription with speaker diarization | ~80ms |
| Trigger Detection | Detect when my participation is needed | ~20ms |
| RAG Retrieval | Query Obsidian vault for relevant context | ~100ms |
| LLM Response | Generate contextually appropriate response | ~400ms |
| Voice Clone TTS | Synthesize response in my cloned voice | ~200ms |
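The stage budgets sum to roughly 800 ms (80 + 20 + 100 + 400 + 200), leaving about 200 ms of headroom against the 1-second target. Here is a toy asyncio sketch of that sequential budget, with every stage stubbed out; none of this is the real pipeline:

```python
import asyncio

# Stage budgets (ms) from the table above; the stages themselves are placeholders.
BUDGET_MS = {"asr": 80, "trigger": 20, "rag": 100, "llm": 400, "tts": 200}

async def stage(name: str) -> None:
    await asyncio.sleep(BUDGET_MS[name] / 1000)  # stand-in for real inference

async def respond_once() -> None:
    # ASR → trigger detection → RAG → LLM → TTS, run sequentially.
    for name in ("asr", "trigger", "rag", "llm", "tts"):
        await stage(name)

async def main() -> None:
    loop = asyncio.get_running_loop()
    t0 = loop.time()
    await respond_once()
    print(f"end-to-end: {(loop.time() - t0) * 1000:.0f} ms")  # ≈800 ms < 1 s target

asyncio.run(main())
```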
### MCP Server Integration
The meeting assistant exposes tools via MCP that Cursor can invoke:
```python
from mcp.server import Server
from mcp.types import Tool

server = Server("meeting-assistant")

@server.list_tools()
async def list_tools():
    # Every MCP tool also requires a JSON-schema inputSchema;
    # a permissive stub keeps this sketch short.
    schema = {"type": "object"}
    return [
        Tool(name="join_meeting", description="Join a Teams meeting", inputSchema=schema),
        Tool(name="leave_meeting", description="Leave the current meeting", inputSchema=schema),
        Tool(name="speak", description="Speak text using cloned voice", inputSchema=schema),
        Tool(name="get_transcript", description="Get meeting transcript", inputSchema=schema),
        Tool(name="get_participants", description="List meeting participants", inputSchema=schema),
        Tool(name="set_trigger_keywords", description="Set response triggers", inputSchema=schema),
        Tool(name="enable_auto_respond", description="Toggle automatic responses", inputSchema=schema),
    ]
```

### Use Cases
- Passive Monitoring — AI listens and builds a live transcript I can query later
- Alert Mode — Notify me (via notification) when specific topics come up
- Auto-Respond — Autonomously answer questions about my projects/expertise
- Full Proxy — Attend meetings entirely on my behalf
This represents the natural evolution of the LLM-powered notes system: from processing meeting outputs (transcripts) to participating in meetings directly.
## Key Takeaways
- Daily notes are the foundation — Capture everything chronologically, link later
- PARA keeps things organized — Projects, Areas, Resources, Archive
- Frontmatter is for machines — YAML metadata makes notes LLM-parseable
- Dataview aggregates — Tasks, status, activity all computed automatically
- Cursor + Obsidian = power combo — Full vault context available to Claude
- Don’t write, process — Feed raw inputs (threads, transcripts) to LLMs for structuring
- Scripts live with notes — `40-Resources/scripts/` keeps automation close
The goal isn’t perfect organization—it’s minimum friction capture with maximum LLM leverage for processing.
## Resources
- Obsidian — The note-taking app
- PARA Method — Tiago Forte’s organization system
- Dataview Plugin — Query your notes
- Templater Plugin — Advanced templates
- Model Context Protocol — For MCP server integration
- Teams AI Library with MCP — Microsoft’s MCP support for Teams bots
Hopefully, posts like this one inspire you to up your note-taking game and take advantage of the power of LLMs to help you capture and organize your work.
In the meantime, keep feeding your LLM!
Daniel