Model Integration Notes

How to apply Real Science prompt packs across different LLM workflows

Overview

These notes explain how to use Real Science prompt packs in working language-model pipelines. The goal is to preserve canonical terminology, corridor boundaries, and output structure while keeping the packs practical to use across different AI systems.

Three main integration styles

Manual chat loading

Saved instruction loading

Artifact-assisted loading

Recommended integration hierarchy

Canonical terminology pack
→ corridor prompt pack
→ task-specific instruction
→ output format rule
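The hierarchy above can be sketched as a simple assembly step. This is a minimal illustration, not a prescribed implementation; the layer contents are hypothetical placeholders.

```python
# Assemble prompt layers in the recommended hierarchy order.
# All pack text below is placeholder content, not real pack material.

TERMINOLOGY_PACK = "Canonical terminology: use the pack's fixed terms."
CORRIDOR_PACK = "Active corridor: (declare the corridor here)."
TASK_INSTRUCTION = "State the current task in one or two sentences."
OUTPUT_FORMAT_RULE = "Respond in plain text."

def assemble_prompt(*layers):
    """Join prompt layers in hierarchy order, skipping any empty layer."""
    return "\n\n".join(layer for layer in layers if layer)

prompt = assemble_prompt(TERMINOLOGY_PACK, CORRIDOR_PACK,
                         TASK_INSTRUCTION, OUTPUT_FORMAT_RULE)
# The four layers are separated by three blank-line breaks.
```

The point of the ordering is that later layers narrow earlier ones: terminology frames the corridor, the corridor frames the task, and the format rule constrains only the output.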

Manual chat loading

For direct chat use, paste the smallest relevant context block first, then declare the active corridor, then state the task clearly.

This is the simplest method and works well for iterative development.
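As a sketch, the three-step manual sequence maps onto an ordinary chat-message list. The message schema below mirrors common chat APIs, but exact field names vary by provider, and the corridor and task strings are illustrative assumptions.

```python
# Manual chat loading: context block first, then the corridor
# declaration, then the task.

context_block = "Pack excerpt: the smallest relevant context block."
corridor = "Active corridor: (hypothetical corridor name)."
task = "The task, stated clearly."

messages = [
    {"role": "system", "content": context_block},
    {"role": "system", "content": corridor},
    {"role": "user", "content": task},
]
```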

Saved instruction loading

For repeated workflows, keep prompt packs in reusable files and paste them into sessions as needed. This reduces drift and helps maintain consistency across multiple conversations.
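One way to keep saved packs consistent is to read them from files in a fixed order each session. The directory layout and `.txt` naming below are assumptions for illustration.

```python
from pathlib import Path

# Saved instruction loading: packs live in reusable text files and are
# concatenated in hierarchy order for pasting into a session.

def load_packs(pack_dir, names):
    """Read the named pack files in order and join them into one block."""
    return "\n\n".join(
        (Path(pack_dir) / f"{name}.txt").read_text(encoding="utf-8").strip()
        for name in names
    )
```

Because every session reads the same files, wording drift between conversations is limited to deliberate edits of the files themselves.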

Artifact-assisted loading

When working from local notes, archive files, or encyclopedia pages, use those artifacts as the stable source and add only the minimum extra instructions needed for the current task.
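A minimal sketch of this pattern, assuming the artifact is a local text file: the artifact is used verbatim as the stable source, and only a short task instruction is appended. The file name and instruction wording are hypothetical.

```python
from pathlib import Path

# Artifact-assisted loading: the artifact text stays unmodified; the
# only addition is the minimum instruction needed for the current task.

def prompt_from_artifact(artifact_path, extra_instruction):
    """Use the artifact as the stable source plus one added instruction."""
    artifact = Path(artifact_path).read_text(encoding="utf-8").strip()
    return f"{artifact}\n\nTask: {extra_instruction}"
```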

Integration rule

Load only what is needed, but always load enough to preserve structure.

Too little context causes drift. Too much context can blur the active corridor.
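The rule can be expressed as a loading check: require the layers that preserve structure, and flag context that grows past a rough budget. The required layer names and the character budget below are arbitrary illustrations, not part of the pack specification.

```python
# Integration rule as a check: too little context (missing structural
# layers) risks drift; too much context may blur the active corridor.

def check_context(layers, max_chars=8000):
    """Validate a dict of layer name -> text before loading a session."""
    required = {"terminology", "corridor", "task"}
    missing = required - set(layers)
    if missing:
        raise ValueError(f"too little context (drift risk): missing {sorted(missing)}")
    total = sum(len(text) for text in layers.values())
    if total > max_chars:
        raise ValueError("too much context: active corridor may blur")
    return total
```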