Generates hierarchical knowledge graphs via the Recursive Pareto Principle for optimised schema construction. Produces four-level structures (L0 meta-graph through L3 detail-graph) in which each level contains 80% fewer nodes than the level below while still grounding 80% of that level's semantic coverage, achieving 51% coverage from 0.8% of nodes via Pareto³ compression. Use when creating domain ontologies or knowledge architectures requiring: (1) atomic first principles with emergent composites, (2) Pareto-optimised information density, (3) small-world topology with validated node ratios (L1:L2 2-3:1), or (4) bidirectional construction. Integrates with graph (η≥4 validation), abduct (refactoring), mega (SuperHyperGraphs), and infranodus (gap detection). Triggers: 'schema generation', 'ontology creation', 'Pareto hierarchy', 'recursive graph', 'first principles decomposition'.
npx skill4agent add zpankz/mcp-skillset rpp

λL.τ : Domain → OptimisedSchema via recursive Pareto compression
L0 (Meta-graph/Schema) ← 0.8% nodes → 51% coverage (Pareto³)
│ abductive generalisation
▼
L1 (Logic-graph/Atomic) ← 4% nodes → 64% coverage (Pareto²)
│ Pareto extraction
▼
L2 (Concept-graph) ← 20% nodes → 80% coverage (Pareto¹)
│ emergent clustering
▼
L3 (Detail-graph) ← 100% nodes → ground truth

| Level | Role | Node % | Coverage | Ratio |
|---|---|---|---|---|
| L0 | Meta-graph/Schema | 0.8% | 51% | 6-9:1 to L1 |
| L1 | Logic-graph/Atomic | 4% | 64% | 2-3:1 to L2 |
| L2 | Concept-graph/Composite | 20% | 80% | — |
| L3 | Detail-graph/Ground-truth | 100% | 100% | — |
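The node and coverage percentages in the table are the 80/20 split compounded once per level. A minimal standalone sketch (not part of the `rpp` package) reproduces the chain:

```python
# Recursive Pareto compression: at each step keep 20% of the nodes
# while grounding 80% of the semantic coverage of the level below.
def pareto_chain(levels: int):
    """Return (node_fraction, coverage) pairs from L3 up through `levels` extractions."""
    node_fraction, coverage = 1.0, 1.0  # L3: 100% nodes, 100% coverage
    history = [(node_fraction, coverage)]
    for _ in range(levels):
        node_fraction *= 0.2   # keep the top 20% of remaining nodes
        coverage *= 0.8        # each step grounds 80% of the prior coverage
        history.append((node_fraction, coverage))
    return history

# L2: 20% nodes / 80% coverage (Pareto¹)
# L1:  4% nodes / 64% coverage (Pareto²)
# L0: 0.8% nodes / 51.2% coverage (Pareto³)
for nodes, cov in pareto_chain(3):
    print(f"{nodes:.3%} nodes → {cov:.1%} coverage")
```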
from rpp import RPPGenerator
# Initialize with domain text
rpp = RPPGenerator(domain="pharmacology")
# Extract ground truth (L3)
l3_graph = rpp.extract_details(corpus)

# Bottom-up: L3 → L2 → L1 → L0
l2_graph = rpp.cluster_concepts(l3_graph, pareto_threshold=0.8)
l1_graph = rpp.extract_atomics(l2_graph, pareto_threshold=0.8)
l0_schema = rpp.generalise_schema(l1_graph, pareto_threshold=0.8)
# Validate ratios
rpp.validate_ratios(l0_schema, l1_graph, l2_graph, l3_graph)

# Ensure small-world properties
metrics = rpp.validate_topology(
target_eta=4.0, # Edge density
target_ratio_l1_l2=(2, 3),
target_ratio_l1_l3=(6, 9)
)

Bottom-up: L3 details → cluster → L2 concepts → extract → L1 atomics → generalise → L0 schema

Top-down: L0 schema → derive → L1 atomics → expand → L2 concepts → ground → L3 details

┌─────────────────────────────────────────┐
│ Bottom-Up ⊗ Top-Down │
│ L3→L2→L1→L0 merge L0→L1→L2→L3 │
│ └───────→ L2 ←───────┘ │
│ convergence │
└─────────────────────────────────────────┘

| Metric | Target | Validation |
|---|---|---|
| η (density) | ≥ 4.0 | Minimum edge density |
| κ (clustering) | > 0.3 | Small-world coefficient |
| φ (isolation) | < 0.2 | No orphan nodes |
| Bridge edges | Present | Cross-level connections |
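As a rough illustration of what these checks measure, the sketch below computes η (edges per node) and φ (fraction of isolated nodes) for a toy adjacency list. The function is hypothetical, not the `graph` skill's actual validator:

```python
# Toy topology check: η = |E| / |V| (edge density) and
# φ = orphan nodes / |V| (isolation). Hypothetical sketch, not the
# actual `graph` skill implementation.
def topology_metrics(adjacency: dict[str, set[str]]) -> dict[str, float]:
    nodes = set(adjacency)
    # Count each undirected edge exactly once.
    edges = {frozenset((u, v)) for u, nbrs in adjacency.items() for v in nbrs}
    orphans = [n for n in nodes if not adjacency[n]]
    return {
        "eta": len(edges) / len(nodes),    # target ≥ 4.0
        "phi": len(orphans) / len(nodes),  # target < 0.2
    }

# A tiny 4-node graph: a triangle plus one orphan node.
g = {
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": set(),
}
print(topology_metrics(g))  # η = 3/4 = 0.75 (too sparse), φ = 1/4 = 0.25 (too isolated)
```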
def pareto_extract(source_graph, target_ratio=0.2):
    """
    Extract Pareto-optimal nodes from source graph.

    Args:
        source_graph: Input graph (e.g., L3 for extracting L2)
        target_ratio: Fraction of nodes to keep (default 0.2 = 20%)

    Returns:
        Reduced graph with at most target_ratio * |source| nodes
        grounding at least (1 - target_ratio) of semantic coverage
    """
    # 1. Compute node importance (PageRank + semantic weight),
    #    returned as a mapping {node: score}
    importance = compute_importance(source_graph)

    # 2. Greedily select top-ranked nodes until cumulative coverage
    #    reaches (1 - target_ratio)
    selected = []
    coverage = 0.0
    for node in sorted(importance, key=importance.get, reverse=True):
        selected.append(node)
        coverage += node.coverage_contribution
        if coverage >= (1 - target_ratio):
            break

    # 3. Verify the Pareto constraint: few nodes, high coverage
    assert len(selected) / len(source_graph) <= target_ratio
    assert coverage >= (1 - target_ratio)

    # 4. Build reduced graph, preserving cross-level bridge edges
    return build_subgraph(selected, preserve_bridges=True)

# Validate RPP topology
from graph import validate_topology
metrics = validate_topology(rpp_graph, require_eta=4.0)

# Refactor schema for optimisation
from abduct import refactor_schema
l0_optimised = refactor_schema(l0_schema, target_compression=0.8)

# Extend to n-SuperHyperGraphs for complex domains
from mega import extend_to_superhypergraph
shg = extend_to_superhypergraph(rpp_graph, max_hyperedge_arity=5)

# Detect structural gaps
gaps = mcp__infranodus__generateContentGaps(rpp_graph.to_text())
bridges = mcp__infranodus__getGraphAndAdvice(optimize="gaps")

| Principle | Application in RPP |
|---|---|
| Fractal self-similarity | Each level mirrors whole structure |
| Pareto distribution | 80/20 at each level compounds |
| Neuroplasticity | Pruning weak, amplifying strong connections |
| Free energy principle | Minimising surprise through compression |
| Critical phase transitions | Level boundaries as phase transitions |
| Power-law distribution | Node importance follows power law |
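The power-law row is what makes the 80/20 split plausible: under a Zipf-like importance distribution, a small head of ranked nodes carries most of the total weight. A standalone illustration (the exponent and node count are assumptions, not measured values):

```python
# Under a Zipf-like (power-law) importance distribution with exponent s = 1,
# the top 20% of 1,000 ranked nodes carries roughly 80% of the total weight.
# The exponent and node count are illustrative assumptions.
def head_share(n_nodes: int, head_fraction: float, s: float = 1.0) -> float:
    """Fraction of total weight held by the top head_fraction of ranked nodes."""
    weights = [1.0 / (rank ** s) for rank in range(1, n_nodes + 1)]
    head = int(n_nodes * head_fraction)
    return sum(weights[:head]) / sum(weights)

share = head_share(1000, 0.20)
print(f"Top 20% of nodes carry {share:.1%} of total importance")
# ≈ 78.5% for s = 1, close to the 80/20 Pareto split
```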
| Need | File |
|---|---|
| Level-specific construction | references/level-construction.md |
| Topology validation | references/topology-validation.md |
| Pareto algorithms | references/pareto-algorithms.md |
| Scale invariance theory | references/scale-invariance.md |
| Integration patterns | references/integration-patterns.md |
| Examples and templates | references/examples.md |
| Script | Purpose |
|---|---|
| scripts/rpp_generator.py | Core RPP graph generation |
| scripts/pareto_extract.py | Level extraction algorithm |
| scripts/validate_ratios.py | Node ratio validation |
| scripts/topology_check.py | Small-world validation |
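To make the bidirectional construction concrete, a convergence check at L2 might compare the concept sets produced by the bottom-up and top-down passes and accept the hierarchy only when they largely agree. This is a hypothetical sketch; the Jaccard threshold of 0.8 is an assumption mirroring the Pareto constant, not part of the scripts above:

```python
# Hypothetical convergence check for bidirectional construction:
# bottom-up (L3→L2) and top-down (L0→L2) passes should agree on most
# L2 concepts before the hierarchy is accepted. The 0.8 threshold is
# an assumption mirroring the Pareto constant.
def l2_convergence(bottom_up: set[str], top_down: set[str],
                   threshold: float = 0.8) -> tuple[float, bool]:
    union = bottom_up | top_down
    jaccard = len(bottom_up & top_down) / len(union) if union else 1.0
    return jaccard, jaccard >= threshold

bu = {"receptor", "ligand", "agonist", "pathway", "dose-response"}
td = {"receptor", "ligand", "agonist", "pathway", "clearance"}
score, converged = l2_convergence(bu, td)
print(score, converged)  # 4 shared / 6 total ≈ 0.667 → not yet converged
```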
λL.τ L3→L2→L1→L0 via Pareto extraction
80/20 → 64/4 → 51/0.8 recursive compression chain
rpp hierarchical knowledge architecture