Knowledge Management
in the Age of AI

Capture decisions. Keep them. Make them findable.

Maciej Jankowski · April 2026

How often does your team
re-decide
something that was already decided?

The decision happened. The logic was sound. The answer is in someone's Slack thread,
someone's head, or someone's last day.

Brand24 makes decisions,
but the decisions don't remember themselves.

The same questions resurface because the answers were never captured anywhere findable. When people leave, knowledge leaves with them.

This is not a Brand24 problem. It is a company-at-scale problem. Everyone has it. Almost nobody solves it cleanly.

The usual fixes have all been tried. Each one misses the real issue.

Approach | Why it doesn't stick
Confluence / Notion | A second place to look. People default to Slack. The wiki decays within 90 days.
"Document everything" | Writing docs is overhead. Nobody writes them under deadline pressure. They go stale.
Record meetings | 45-minute recordings nobody rewatches. The decision is in minute 37.
AI search | Searching garbage returns garbage faster. The input-quality problem is upstream.

The core issue is not storage. It is capture discipline - making it effortless to log the decision at the moment it's made, and effortless to find it at the moment it's needed.

The architecture

CAPTURE ───────────▶ MEMORY ───────────▶ RETRIEVAL
(where decisions     (where they         (how people
enter the system)    live forever)       find them)

Layer 1

The Decision Card inside Slack. 60-90 seconds. One template. No new tool.

Layer 2

A knowledge graph that links decisions to each other, to people, to topics.

Layer 3

AI retrieval: search, contextual suggestions, onboarding digests.

The Decision Card

The fields evolved empirically from my personal framework: each one was added after a specific mistake left me wishing I'd had it the time before.

What we decided: Switch billing to Paddle
Why (rationale): EU VAT handling, 15h/mo saved
Alternatives: Stripe Tax, manual handling
Second choice: Stripe Tax if price drops
Who decided: @tomek, @ania, approved @michal
Confidence: 0.85
Impact: high (every invoice)
Reversibility: 2-way door, 4-6wk
Evidence: 3 peer companies migrated
Risks: Fees scale, support unknown
Perspectives: Finance, Eng, CS
Review date: 2026-10-10 (6mo)

Slack command: /decision. Posts to #decisions. Writes to memory layer. 60-90 seconds. Zero new tools.
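The capture path is deliberately thin: validate the card, append one line to the log. A minimal sketch in Python, assuming a local `decisions.jsonl` file and the field names shown above (the `log_decision` helper and its required-field list are illustrative, not the actual Slack workflow):

```python
import datetime
import json

# Assumed minimal field set; the real card has more optional fields.
REQUIRED = ["decided", "rationale", "who", "confidence", "impact", "reversibility"]

def log_decision(card: dict, path: str = "decisions.jsonl") -> dict:
    """Validate a Decision Card and append it as one JSONL line."""
    missing = [f for f in REQUIRED if f not in card]
    if missing:
        raise ValueError(f"card missing fields: {missing}")
    card = dict(card)  # don't mutate the caller's dict
    card.setdefault("logged_at", datetime.date.today().isoformat())
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(card, ensure_ascii=False) + "\n")
    return card
```

Append-only JSONL keeps the memory layer trivially durable: nothing is ever overwritten, and superseding a decision is just another line.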

Impact × Reversibility

Bezos's 1-way / 2-way door doctrine. Most teams over-deliberate on reversible decisions and under-deliberate on irreversible ones. The card makes the distinction explicit at the moment of deciding.

               2-way door (reversible)      1-way door (hard to reverse)

Low impact     Decide fast, don't card      Card it; a 5-minute card
               it (noise)                   is enough

High impact    Card it, decide in a day     Full analysis required.
                                            Red team mandatory.
                                            PARDES five-reader.
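The quadrant logic is small enough to encode directly, so the system can route a new card to the right process automatically. A sketch (the `triage` function and its label strings are illustrative):

```python
def triage(impact: str, reversibility: str) -> str:
    """Map an Impact x Reversibility quadrant to a decision process."""
    table = {
        ("low",  "2-way"): "decide fast, don't card it (noise)",
        ("low",  "1-way"): "card it; a 5-minute card is enough",
        ("high", "2-way"): "card it, decide in a day",
        ("high", "1-way"): "full analysis + red team + PARDES five-reader",
    }
    return table[(impact, reversibility)]
```

Making the lookup explicit is the point: the process is chosen by the card's own fields, not by whoever shouts loudest in the thread.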

Dual format by design

Human-readable

Markdown. Browsable in Slack, GitHub, wiki. Humans scroll.

## DEC-047: Switch to Paddle
Why: EU VAT handling
Confidence: 0.85
Impact: high / 2-way door

Machine-readable

JSONL. Queryable by AI, indexable by vector search, exportable. AI retrieves.

{"id":"DEC-047",
 "decided":"Switch to Paddle",
 "confidence":0.85,
 "impact":"high",
 "reversibility":"2-way"}

Same source. Two reading modes. Neither is a second-class citizen.
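One way to keep the two formats from drifting is to generate the human-readable card from the machine-readable record. A sketch, using the field names from the examples above (the `to_markdown` helper is hypothetical):

```python
def to_markdown(card: dict) -> str:
    """Render one JSONL decision record in the human-readable card format."""
    return "\n".join([
        f"## {card['id']}: {card['decided']}",
        f"Why: {card['why']}",
        f"Confidence: {card['confidence']}",
        f"Impact: {card['impact']} / {card['reversibility']} door",
    ])
```

The JSONL line stays the single source of truth; the markdown view is derived, so the two can never disagree.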

Decisions become a graph

Text is searchable. A graph is traversable. Same decisions, but now they connect.

Node | Edges
Decision | supersedes, depends_on, contradicts
Person | made_by
Team | owns
Topic | topic

What a graph gives you:

  • "Show me everything @tomek decided"
  • "What does Paddle migration depend on?"
  • "What did we supersede? Why?"
  • Dedup at capture - "existing DEC-047 found, update?"
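The queries above need nothing exotic; a typed-edge adjacency map already answers them. A minimal sketch (the `DecisionGraph` class is illustrative, not MemPalace's actual implementation):

```python
from collections import defaultdict

class DecisionGraph:
    """Tiny typed-edge graph over decision IDs, people, and topics."""

    def __init__(self):
        # (source_node, relation) -> list of target nodes
        self.edges = defaultdict(list)

    def link(self, src: str, relation: str, dst: str) -> None:
        self.edges[(src, relation)].append(dst)

    def query(self, src: str, relation: str) -> list:
        """Forward lookup, e.g. query("DEC-047", "depends_on")."""
        return self.edges.get((src, relation), [])

    def decided_by(self, person: str) -> list:
        """Reverse lookup: everything a given person decided."""
        return [src for (src, rel), dsts in self.edges.items()
                if rel == "made_by" and person in dsts]

g = DecisionGraph()
g.link("DEC-047", "supersedes", "DEC-031")
g.link("DEC-047", "made_by", "@tomek")
```

Even this toy version answers "what did @tomek decide?" and "what does DEC-047 supersede?" in one call, which Slack search structurally cannot.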

Real-world reference: MemPalace (github.com/MemPalace/mempalace) - an open-source semantic graph MCP server. I run a loaded instance with 20,564 entries across typed wings and rooms. The architecture works at real scale with real data.

The decision timeline

2026-Q1                                 2026-Q2
──●────────●────────●──────────●────────●──
  │        │        │          │        │
  │        │        │          │        └─ DEC-052: API rate limits
  │        │        │          │           confidence: 0.9 / impact: med / 2-way
  │        │        │          │
  │        │        │          └─ DEC-047: Switch to Paddle [ACTIVE]
  │        │        │             supersedes DEC-031
  │        │        │             confidence: 0.85 / impact: high / 2-way
  │        │        │             review: 2026-10-10
  │        │        │
  │        │        └─ DEC-031: Stay on Stripe [SUPERSEDED]
  │        │
  │        └─ DEC-028: Hire senior backend (1-way door)
  │
  └─ DEC-019: Kill the mobile app (1-way door, PARDES ran)

Killer filter for the CEO: "high impact + 1-way door + confidence below 0.8" - the decisions most likely to hurt the company. That view alone justifies the entire system.
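Because the cards are structured, that CEO view is a three-condition filter over the log, not a report anyone has to compile. A sketch (the `ceo_watchlist` helper is illustrative):

```python
def ceo_watchlist(cards: list) -> list:
    """High impact + 1-way door + confidence below 0.8:
    the decisions most likely to hurt the company."""
    return [c for c in cards
            if c["impact"] == "high"
            and c["reversibility"] == "1-way"
            and c["confidence"] < 0.8]
```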

AI earns its keep here

Not as a chatbot. As a search interface that understands context.

Direct search

"What did we decide about billing?" → Returns the card, not a Slack thread.

3 seconds vs. 15 minutes

Contextual suggestion

Someone opens a Slack thread about billing → system surfaces "Related: DEC-047, made 2026-04-10."

Prevents re-litigation

Onboarding digest

New hire → "Here are the 12 active decisions in your domain. 3 up for review this quarter."

Day 1 context, not month 3
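Even the crudest retrieval over structured cards beats scrolling Slack. A toy keyword-overlap ranker, for illustration only (a production system would use embeddings and the vector index mentioned earlier):

```python
def search(query: str, cards: list, k: int = 3) -> list:
    """Rank cards by word overlap with the query. Toy baseline only."""
    q = set(query.lower().split())

    def score(card: dict) -> int:
        # Flatten all field values into one bag of words.
        words = set(" ".join(str(v) for v in card.values()).lower().split())
        return len(q & words)

    return sorted(cards, key=score, reverse=True)[:k]
```

The 3-seconds-vs-15-minutes gap comes from the data shape, not from model sophistication: a card is one dense record, a Slack thread is forty messages of noise.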

The AI doesn't just store - it challenges

Three checks run automatically when a Decision Card is filled:

1. Prior decision search

Has this topic been decided? If yes, surface it with alternatives, confidence, review date.

2. Adversary test

AI generates the strongest counter-argument to the chosen option. Not generic doubt - specific steel-manned objection.

3. Bias scan

Flags sunk cost, groupthink, planning fallacy, bandwagon, urgency bias patterns in the card text.

Example adversary test: "You chose Paddle for VAT. Stripe Tax launched a flat-rate EU option in March. At your volume, that's $1,500/mo vs Paddle's 5%. Have you priced Stripe Tax at current rates, not the rates from when this was last discussed?"
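The bias scan can start as a blunt phrase-matcher before any LLM is involved. A sketch (the marker phrases are invented for illustration; a real system would use a classifier over the card text):

```python
# Hypothetical marker phrases per bias pattern.
BIAS_MARKERS = {
    "sunk cost":        ["already invested", "already spent", "too far in"],
    "urgency bias":     ["no time", "must decide today", "asap"],
    "bandwagon":        ["everyone is", "industry standard", "peers migrated"],
    "planning fallacy": ["should only take", "quick win", "easy migration"],
}

def bias_scan(text: str) -> list:
    """Return the bias patterns whose marker phrases appear in the card text."""
    text = text.lower()
    return [bias for bias, phrases in BIAS_MARKERS.items()
            if any(p in text for p in phrases)]
```

Even this naive version flags the most common failure mode: a rationale that argues from what was spent rather than from what comes next.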

Tier 2: When the decision defines the company

For the 5-10 decisions per quarter that shape the trajectory. Pricing changes, market entry, senior hires, tech stack.

PARDES five-reader engine

  • Surface - what the data literally says
  • Pattern - converging signals
  • Interpretation - hidden assumptions, cui bono
  • Adversary - strongest counter
  • Emergence - what combining reveals

Structured dissent (human layer)

  • Delegation Poker - who actually decides? (7 levels, Management 3.0)
  • Red Team - one person assigned to argue against. Role rotates weekly. (US Army doctrine)
  • Concern Round - each person states one remaining concern, or an explicit "none" (Quaker, sociocracy)

The AI catches logical flaws. Human protocols catch the silence - the person who saw the problem but didn't speak.

90-day roadmap

Week | Action | Outcome
1-2 | Build Slack /decision workflow. Deploy to one team. | Capture mechanism live. First 20-30 cards logged.
3-4 | Collect feedback. Adjust fields. Deploy to 2-3 teams. | ~100 cards. First "I found it in the log" moments.
5-8 | Connect AI retrieval. Contextual suggestions in Slack. | Search + suggestions active. Repeat-discussion rate measurable.
9-12 | Build topic graph. Generate onboarding digests. First knowledge audit. | Full system operational. Baseline retention metrics.

Recommendation: Start minimal (week 1: Google Sheet + Slack bot, one team). Prove the capture habit works. Then build toward the full graph over weeks 5-12 using the data already captured. The graph is only as good as the data in it - and the data comes from the habit, not the technology.

What this is NOT

Not a wiki project

Wikis are where knowledge goes to die, maintained by the one person who cares until they leave.

Not an AI chatbot

Chatbots answer questions about knowledge that's already been captured. This system captures the knowledge in the first place.

This IS decision infrastructure

It makes decisions visible, searchable, persistent - regardless of who made them or whether they still work here.

The AI is the retrieval layer, not the capture layer. Humans make decisions. The system makes them findable.

The adjacent product Brand24 could sell

Brand24 already sells signal detection. This toolkit captures the internal response to those signals - closing the loop from signal to action to memory.

Why Brand24 specifically

  • Existing customer base (companies that act on intelligence)
  • Existing AI infrastructure (NLP, sentiment, signal detection)
  • Internal deployment = proof-of-concept for external product
  • Natural pricing extension: listening + decision intelligence

The moat

Every tool helps companies find past decisions (Notion, Guru, Confluence). None help companies make better decisions at the moment of deciding. The adversary test, bias scan, PARDES engine, structured dissent - that's the layer nobody has built.

The product is the challenge layer, not the archive.

This turns the project from a cost center into product R&D.

Where this comes from

I've been iterating on AI-assisted decision frameworks for my own work since summer 2025. Four major versions so far. The pattern in this proposal is what survived across all of them.

AADS v1→v6 (summer 2025) → 8SENS (Oct 2025) → 9SENS (Oct 2025) → nSENS (Nov 2025) → RAZEM concept (current)

Numbers I can show

300+ decisions logged across projects
137 in the largest single project log
20,564 MemPalace entries

Supporting personas

Zbigniew - cold adversarial analysis
Bozenka - fact-checking
Konrad - buyer recon
+20 others

External sources integrated

Management 3.0, US Army Red Team doctrine, Quaker decision-making, sociocracy, PARDES exegesis, special ops after-action review.

Every decision gets a card.
60 seconds. No exceptions.
The system makes it easier than the alternative.

The alternative is: someone asks the same question in 3 months,
nobody remembers, the team re-discusses for 45 minutes,
and reaches the same conclusion.

The card costs 60 seconds. The re-discussion costs 45 min × 4 people.
The math does the enforcement.

Questions?

Maciej Jankowski
maciej.artur.jankowski@gmail.com