# Edge Types
Edges connect nodes and control how data flows through your workflow. SeeMe.ai supports two edge types: data edges and context edges.
## Overview
| Edge Type | Purpose | Execution | Data Flow |
|---|---|---|---|
| Data | Main processing flow | Sequential | Output → Input |
| Context | Inject reference data | Source skipped | Injected into template |
```mermaid
graph LR
  subgraph Data Flow
    A[Model A] -->|data| B[Model B]
    B -->|data| C[Model C]
  end
  subgraph Context Injection
    D[Rules Dataset] -.->|context| B
  end
```

## Data Edges
Data edges define the main processing flow. The output of one node becomes the input to the next.
### How Data Edges Work
- The source node executes and produces output
- The output is stored in `variables[source_node_id]`
- The target node receives the output as `{{source_node_id}}` in its template
- The `input` field is updated with the text output
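The mechanics above can be sketched in plain Python. This is an illustrative model only — the helper name and substitution logic are assumptions, not SeeMe.ai's internal executor:

```python
import re

def run_data_edge(variables, source_node_id, source_output, input_template):
    """Sketch of data-edge propagation (hypothetical helper): store the
    source output, update the flowing input, then render the target's
    template by substituting {{name}} placeholders."""
    variables[source_node_id] = source_output  # variables[source_node_id]
    variables["input"] = source_output         # the current flowing input
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), "")),
        input_template,
    )

variables = {}
rendered = run_data_edge(variables, "ocr", "John works at Acme", "{{ocr}}")
# rendered == "John works at Acme"
```

The key point the sketch illustrates is that a data edge updates *both* the named variable and the shared `input`, so downstream templates can use either.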
### Creating Data Edges
```python
# Connect OCR output to NER input
edge = client.create_workflow_edge(
    version_id=version.id,
    begin_node_id=ocr_node.id,
    end_node_id=ner_node.id,
    edge_type="data"
)
```

### Data Flow Example
```mermaid
sequenceDiagram
  participant OCR as OCR Node
  participant NER as NER Node
  participant Ctx as Context
  OCR->>Ctx: Execute, output = "John works at Acme"
  Ctx->>Ctx: variables["ocr"] = "John works at Acme"
  Ctx->>Ctx: input = "John works at Acme"
  Ctx->>NER: Execute with input_template
  NER->>Ctx: output = [{entity: "John", label: "PERSON"}]
```

### Referencing Data Edge Output
In the target node's `input_template`:
```python
ner_node = client.create_workflow_node(
    version_id=version.id,
    name="NER",
    entity_type="model",
    entity_id=ner_model.id,
    config={
        # Reference OCR output directly
        "input_template": "{{" + ocr_node.id + "}}"
    }
)
```

Or use the current `input`, which holds the latest text output:
```python
config={
    "input_template": "{{input}}"  # Uses the current flowing input
}
```

## Context Edges
Context edges inject additional data into a node without affecting the main execution flow. The source node is not executed—it provides read-only data.
### How Context Edges Work
- The source node is skipped during execution
- Data is loaded from the source (typically a dataset)
- The data is injected into the target node's template as `{{context_name}}`
- The main input is unchanged; context is supplementary
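Loading and remapping can be sketched as follows. This is a hypothetical helper, not the SeeMe.ai loader, and it assumes `field_mapping` reads as template field → dataset field (matching the examples in this document):

```python
def load_context(records, field_mapping, context_name):
    """Sketch of context-edge loading: remap each dataset record's fields
    and expose the resulting list under the configured context name."""
    remapped = [
        {alias: record.get(source_field) for alias, source_field in field_mapping.items()}
        for record in records
    ]
    return {context_name: remapped}

# Dataset rows as stored, remapped for use as {{rule_name}} / {{rule_description}}.
rows = [{"name": "no_pii", "description": "Redact personal data"}]
context = load_context(
    rows,
    {"rule_name": "name", "rule_description": "description"},
    "rules",
)
# context["rules"][0]["rule_name"] == "no_pii"
```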
### Use Cases
| Scenario | Context Source | Usage |
|---|---|---|
| Business rules | Rules dataset | Apply rules to LLM prompts |
| Templates | Template dataset | Dynamic prompt templates |
| Reference data | Lookup dataset | Enrich with metadata |
| Examples | Few-shot examples | In-context learning |
### Creating Context Edges
```python
# Configure the source dataset node with context config
rules_node = client.create_workflow_node(
    version_id=version.id,
    name="Business Rules",
    entity_type="dataset",
    entity_id=rules_dataset.id,
    config={
        "context_config": {
            "field_mapping": {
                "rule_name": "name",
                "rule_description": "description"
            },
            "context_name": "rules"
        }
    }
)

# Create a context edge to the model that needs the rules
edge = client.create_workflow_edge(
    version_id=version.id,
    begin_node_id=rules_node.id,
    end_node_id=llm_node.id,
    edge_type="context"  # Context, not data!
)
```

### Using Context in Templates
The context data is available via `{{context_name}}`:
```python
llm_node = client.create_workflow_node(
    version_id=version.id,
    name="Apply Rules",
    entity_type="model",
    entity_id=llm_model.id,
    config={
        "input_template": """
You must follow these business rules:
{{#each rules}}
- {{rule_name}}: {{rule_description}}
{{/each}}

Now analyze the following text according to these rules:
{{input}}
"""
    }
)
```

### Context Iteration
For cases where you need to run the model once per context item:
```python
config={
    "context_config": {
        "field_mapping": {"prompt": "text"},
        "context_name": "prompts",
        "iterate_context": True,  # Run once per context item
        "max_parallel": 5         # Concurrent executions
    }
}
```

Behavior with `iterate_context: True`:
- LLM runs N times (once per context item)
- Results aggregated into array
- Useful for batch processing with different prompts
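The fan-out behavior can be approximated with a thread pool. A minimal sketch, assuming the engine bounds concurrency at `max_parallel` and preserves input order (the helper and model stand-in are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def iterate_context(items, run_model, max_parallel=5):
    """Sketch of iterate_context semantics: run the model once per
    context item, at most max_parallel at a time, and aggregate the
    results into a list in input order."""
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(run_model, items))

prompts = [{"prompt": f"Summarize section {i}"} for i in range(3)]
# A trivial stand-in for a model call: uppercase the prompt.
results = iterate_context(prompts, lambda item: item["prompt"].upper(), max_parallel=5)
# results == ["SUMMARIZE SECTION 0", "SUMMARIZE SECTION 1", "SUMMARIZE SECTION 2"]
```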
## Comparison
| Aspect | Data Edge | Context Edge |
|---|---|---|
| Source executes | Yes | No (skipped) |
| Affects input | Yes | No |
| Access in template | `{{node_id}}` | `{{context_name}}` |
| Typical source | Model | Dataset |
| Purpose | Main flow | Supplementary data |
## Visual Representation
In the workflow editor:
- Data edges: Solid lines with arrow
- Context edges: Dashed lines with circle indicator
```mermaid
graph LR
  A[OCR] -->|data| B[NER]
  B -->|data| C[LLM]
  D[Rules] -.->|context| C
  E[Examples] -.->|context| C
```

## Complex Example: Document Analysis with Rules
```python
from seeme import Client

client = Client()

# Assumes ocr_model, ner_model, llm_model, rules_dataset, and
# examples_dataset have already been created or looked up.

# Create workflow
workflow = client.create_workflow(name="Rule-Based Document Analysis")
version = workflow.versions[0]

# Main processing nodes
ocr_node = client.create_workflow_node(
    version_id=version.id,
    name="Extract Text",
    entity_type="model",
    entity_id=ocr_model.id,
    config={"input_template": "{{input}}"}
)

ner_node = client.create_workflow_node(
    version_id=version.id,
    name="Find Entities",
    entity_type="model",
    entity_id=ner_model.id,
    config={"input_template": "{{" + ocr_node.id + "}}"}
)

# Context nodes (will not execute, just provide data)
rules_node = client.create_workflow_node(
    version_id=version.id,
    name="Compliance Rules",
    entity_type="dataset",
    entity_id=rules_dataset.id,
    config={
        "context_config": {
            "field_mapping": {"rule": "text", "severity": "level"},
            "context_name": "rules"
        }
    }
)

examples_node = client.create_workflow_node(
    version_id=version.id,
    name="Few-Shot Examples",
    entity_type="dataset",
    entity_id=examples_dataset.id,
    config={
        "context_config": {
            "field_mapping": {"input": "document", "output": "analysis"},
            "context_name": "examples"
        }
    }
)

# Analysis node that uses context
llm_node = client.create_workflow_node(
    version_id=version.id,
    name="Compliance Analysis",
    entity_type="model",
    entity_id=llm_model.id,
    config={
        "input_template": """
You are a compliance analyst. Here are some examples of how to analyze documents:
{{#each examples}}
Document: {{input}}
Analysis: {{output}}
{{/each}}

Apply these compliance rules:
{{#each rules}}
- [{{severity}}] {{rule}}
{{/each}}

Document text:
{{""" + ocr_node.id + """}}

Entities found:
{{""" + ner_node.id + """}}

Provide your compliance analysis:
"""
    }
)

# Data edges (main flow)
client.create_workflow_edge(version_id=version.id, begin_node_id=ocr_node.id, end_node_id=ner_node.id, edge_type="data")
client.create_workflow_edge(version_id=version.id, begin_node_id=ner_node.id, end_node_id=llm_node.id, edge_type="data")

# Context edges (supplementary data)
client.create_workflow_edge(version_id=version.id, begin_node_id=rules_node.id, end_node_id=llm_node.id, edge_type="context")
client.create_workflow_edge(version_id=version.id, begin_node_id=examples_node.id, end_node_id=llm_node.id, edge_type="context")
```

## Best Practices
- Use data edges for the main pipeline - clear execution flow
- Use context edges for reference data - rules, templates, examples
- Keep context data small - large datasets slow down execution
- Name context clearly - `rules`, `examples`, `templates`
- Consider caching - context data can be cached across executions
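The caching point can be sketched with a simple in-memory cache keyed by source node. This is purely illustrative; whether and how SeeMe.ai caches context internally is not specified here:

```python
_context_cache = {}

def get_context(node_id, loader):
    """Load context data for a source node at most once per process,
    so repeated workflow executions reuse it (illustrative sketch)."""
    if node_id not in _context_cache:
        _context_cache[node_id] = loader()
    return _context_cache[node_id]

calls = []
def load_rules():
    calls.append(1)  # track how often the dataset is actually read
    return [{"rule_name": "no_pii"}]

first = get_context("rules-node", load_rules)
second = get_context("rules-node", load_rules)
# load_rules ran once; both calls returned the cached data
```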
## Common Patterns
### RAG Pipeline
```mermaid
graph LR
  A[Query] -->|data| B[Embedding]
  B -->|data| C[Vector Search]
  C -->|data| D[LLM]
  E[Knowledge Base] -.->|context| D
```

### Multi-Rule Validation
```mermaid
graph LR
  A[Document] -->|data| B[Validator]
  C[Rules Set A] -.->|context| B
  D[Rules Set B] -.->|context| B
  E[Exceptions] -.->|context| B
```
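When several context edges target one node, each dataset appears in the template under its own `context_name`. A minimal sketch of that merge, assuming colliding names would simply overwrite (the helper is hypothetical, not the SeeMe.ai engine):

```python
def merge_contexts(*contexts):
    """Sketch: merge the payloads of multiple context edges into a single
    template namespace, keyed by each edge's context_name."""
    merged = {}
    for ctx in contexts:
        merged.update(ctx)
    return merged

namespace = merge_contexts(
    {"rules_a": [{"rule": "no_pii"}]},
    {"rules_b": [{"rule": "gdpr"}]},
    {"exceptions": [{"rule": "test_data_ok"}]},
)
# The validator's template can reference {{#each rules_a}}, {{#each rules_b}},
# and {{#each exceptions}} independently.
```

This is why clear `context_name` values matter: they are the only thing keeping the three rule sets distinct inside one template.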