# Examples
Common workflow patterns and templates to get you started.
## Document Processing

### Invoice Processing Pipeline

Extract and validate invoice data from PDF documents.
```mermaid
graph LR
    A[PDF Upload] --> B[OCR]
    B --> C[NER: Extract Fields]
    C --> D[LLM: Validate]
    D --> E[Output Dataset]
```

```python
from seeme import Client

client = Client()

# Create workflow
workflow = client.create_workflow(
    name="Invoice Processing",
    description="Extract invoice data from PDFs"
)
version = workflow.versions[0]

# OCR node
ocr_node = client.create_workflow_node(
    version_id=version.id,
    name="Extract Text",
    entity_type="model",
    entity_id=ocr_model.id,
    config={"input_template": "{{input}}"}
)

# NER node for field extraction
ner_node = client.create_workflow_node(
    version_id=version.id,
    name="Extract Fields",
    entity_type="model",
    entity_id=ner_model.id,
    config={
        "input_template": "{{" + ocr_node.id + "}}"
    }
)

# LLM for validation and structuring
llm_node = client.create_workflow_node(
    version_id=version.id,
    name="Validate & Structure",
    entity_type="model",
    entity_id=llm_model.id,
    config={
        "input_template": """
Extract structured invoice data from this text:
{{""" + ocr_node.id + """}}
Entities found:
{{""" + ner_node.id + """}}
Return JSON with fields: invoice_number, date, vendor, items, total
"""
    }
)

# Output to dataset
output_node = client.create_workflow_node(
    version_id=version.id,
    name="Store Results",
    entity_type="dataset",
    entity_id=output_dataset.id,
    config={
        "output_dataset_id": output_dataset.id,
        "column_mapping": {
            "raw_text": "{{" + ocr_node.id + "}}",
            "entities": "{{" + ner_node.id + "}}",
            "structured_data": "{{" + llm_node.id + "}}"
        }
    }
)

# Connect with data edges
edges = [
    (ocr_node.id, ner_node.id),
    (ner_node.id, llm_node.id),
    (llm_node.id, output_node.id)
]
for begin, end in edges:
    client.create_workflow_edge(
        version_id=version.id,
        begin_node_id=begin,
        end_node_id=end,
        edge_type="data"
    )
```
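The `{{...}}` placeholders in each `input_template` are resolved at run time: `{{input}}` is replaced by the workflow input, and `{{<node_id>}}` by that upstream node's output. Conceptually the substitution works like this (a simplified pure-Python sketch, not the actual SeeMe.ai engine; the node ids below are made up for illustration):

```python
import re

def render_template(template: str, workflow_input: str, node_outputs: dict) -> str:
    """Resolve {{input}} and {{<node_id>}} placeholders against known outputs."""
    values = {"input": workflow_input, **node_outputs}

    def resolve(match: re.Match) -> str:
        key = match.group(1).strip()
        return str(values.get(key, match.group(0)))  # leave unknown refs as-is

    return re.sub(r"\{\{([^{}]+)\}\}", resolve, template)

rendered = render_template(
    "Entities found in {{node-ocr-1}}: {{node-ner-1}}",
    workflow_input="invoice.pdf",
    node_outputs={"node-ocr-1": "ACME Corp Invoice #42", "node-ner-1": "ORG: ACME Corp"},
)
```

This is also why templates concatenate `ocr_node.id` into the string: the node id is the lookup key for that node's output.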
### Meeting Transcription & Summary

Process audio recordings into structured meeting notes.
```mermaid
graph LR
    A[Audio File] --> B[Speech-to-Text]
    B --> C[Diarization]
    C --> D[NER: Participants]
    D --> E[LLM: Summary]
    E --> F[Output]
```

```python
# STT node
stt_node = client.create_workflow_node(
    version_id=version.id,
    name="Transcribe",
    entity_type="model",
    entity_id=stt_model.id,
    config={"input_template": "{{input}}", "timeout": 300}
)

# Speaker diarization
diarize_node = client.create_workflow_node(
    version_id=version.id,
    name="Identify Speakers",
    entity_type="model",
    entity_id=diarization_model.id,
    config={"input_template": "{{" + stt_node.id + "}}"}
)

# Extract participant names
ner_node = client.create_workflow_node(
    version_id=version.id,
    name="Extract Names",
    entity_type="model",
    entity_id=ner_model.id,
    config={"input_template": "{{" + diarize_node.id + "}}"}
)

# Generate meeting summary
summary_node = client.create_workflow_node(
    version_id=version.id,
    name="Generate Summary",
    entity_type="model",
    entity_id=llm_model.id,
    config={
        "input_template": """
Create meeting notes from this transcript:
{{""" + diarize_node.id + """}}
Participants identified: {{""" + ner_node.id + """}}
Include:
1. Key discussion points
2. Decisions made
3. Action items with owners
"""
    }
)
```
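The exact diarization output format depends on the model, but the summary prompt ultimately needs a flat, speaker-labelled transcript. A sketch of that flattening step (the segment fields here are illustrative, not the SeeMe.ai schema):

```python
def format_transcript(segments: list) -> str:
    """Collapse diarized segments into 'Speaker: text' lines for an LLM prompt."""
    return "\n".join(f"{seg['speaker']}: {seg['text']}" for seg in segments)

transcript = format_transcript([
    {"speaker": "Speaker 1", "text": "Let's ship the release on Friday."},
    {"speaker": "Speaker 2", "text": "Agreed, I'll own the changelog."},
])
```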
## Multi-Model Validation

### Quality Inspection with Ensemble

Use multiple models for higher-confidence predictions.
```mermaid
graph TD
    A[Product Image] --> B[Model A]
    A --> C[Model B]
    A --> D[Model C]
    B --> E[LLM: Aggregate]
    C --> E
    D --> E
    E --> F[Final Decision]
```

```python
# Three classification models
model_a_node = client.create_workflow_node(
    version_id=version.id,
    name="Quality Model A",
    entity_type="model",
    entity_id=model_a.id,
    config={"input_template": "{{input}}"}
)

model_b_node = client.create_workflow_node(
    version_id=version.id,
    name="Quality Model B",
    entity_type="model",
    entity_id=model_b.id,
    config={"input_template": "{{input}}"}
)

model_c_node = client.create_workflow_node(
    version_id=version.id,
    name="Quality Model C",
    entity_type="model",
    entity_id=model_c.id,
    config={"input_template": "{{input}}"}
)

# Aggregation with LLM
aggregate_node = client.create_workflow_node(
    version_id=version.id,
    name="Aggregate Results",
    entity_type="model",
    entity_id=llm_model.id,
    config={
        "input_template": """
Three quality inspection models analyzed a product image:
Model A: {{""" + model_a_node.id + """}}
Model B: {{""" + model_b_node.id + """}}
Model C: {{""" + model_c_node.id + """}}
Based on these predictions:
1. What is the consensus verdict (pass/fail)?
2. Confidence level (high/medium/low)?
3. If there's disagreement, explain concerns.
Return JSON: {verdict, confidence, notes}
"""
    }
)

# All models connect to the aggregator
for model_node in [model_a_node, model_b_node, model_c_node]:
    client.create_workflow_edge(
        version_id=version.id,
        begin_node_id=model_node.id,
        end_node_id=aggregate_node.id,
        edge_type="data"
    )
```
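The prompt asks the LLM for a consensus verdict and a confidence level. When the model outputs are strictly categorical, the same aggregation can also be done deterministically; a majority-vote sketch (the confidence thresholds are illustrative, not a SeeMe.ai convention):

```python
from collections import Counter

def aggregate(verdicts: list) -> dict:
    """Majority vote with confidence based on how many models agree."""
    winner, count = Counter(verdicts).most_common(1)[0]
    ratio = count / len(verdicts)
    confidence = "high" if ratio == 1.0 else "medium" if ratio >= 2 / 3 else "low"
    return {"verdict": winner, "confidence": confidence}
```

A fixed vote cannot weigh free-text reasoning, which is why the example routes disagreement explanations through the LLM instead.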
## RAG (Retrieval-Augmented Generation)

### Document Q&A System

Answer questions using your document knowledge base.
```mermaid
graph LR
    A[User Question] --> B[Embedding]
    B --> C[Vector Search]
    C --> E[LLM Answer]
    D[Knowledge Base] -.->|context| E
```

```python
# Embed the question
embed_node = client.create_workflow_node(
    version_id=version.id,
    name="Embed Question",
    entity_type="model",
    entity_id=embedding_model.id,
    config={"input_template": "{{input}}"}
)

# Vector search (returns relevant documents)
search_node = client.create_workflow_node(
    version_id=version.id,
    name="Search Knowledge Base",
    entity_type="model",
    entity_id=search_model.id,
    config={
        "input_template": "{{" + embed_node.id + "}}",
        "config": {"top_k": 5}
    }
)

# Knowledge base context
kb_node = client.create_workflow_node(
    version_id=version.id,
    name="Knowledge Base",
    entity_type="dataset",
    entity_id=knowledge_dataset.id,
    config={
        "context_config": {
            "field_mapping": {"content": "text", "source": "filename"},
            "context_name": "documents"
        }
    }
)

# Generate answer
answer_node = client.create_workflow_node(
    version_id=version.id,
    name="Generate Answer",
    entity_type="model",
    entity_id=llm_model.id,
    config={
        "input_template": """
Answer this question using only the provided context.
Question: {{input}}
Relevant documents:
{{""" + search_node.id + """}}
If the answer isn't in the context, say "I don't have information about that."
"""
    }
)

# Data edges
client.create_workflow_edge(version_id=version.id, begin_node_id=embed_node.id, end_node_id=search_node.id, edge_type="data")
client.create_workflow_edge(version_id=version.id, begin_node_id=search_node.id, end_node_id=answer_node.id, edge_type="data")

# Context edge for the knowledge base
client.create_workflow_edge(version_id=version.id, begin_node_id=kb_node.id, end_node_id=answer_node.id, edge_type="context")
```
## Customer Support Automation

### Ticket Classification & Response

Automatically classify and draft responses to support tickets.
```mermaid
graph LR
    A[Support Ticket] --> B[Classify Category]
    B --> C[Sentiment Analysis]
    C --> D[LLM: Draft Response]
    E[Response Templates] -.->|context| D
    F[Product Info] -.->|context| D
    D --> G[Output]
```

```python
# Category classification
classify_node = client.create_workflow_node(
    version_id=version.id,
    name="Classify Category",
    entity_type="model",
    entity_id=classifier_model.id,
    config={"input_template": "{{input}}"}
)

# Sentiment analysis
sentiment_node = client.create_workflow_node(
    version_id=version.id,
    name="Analyze Sentiment",
    entity_type="model",
    entity_id=sentiment_model.id,
    config={"input_template": "{{input}}"}
)

# Response templates context
templates_node = client.create_workflow_node(
    version_id=version.id,
    name="Response Templates",
    entity_type="dataset",
    entity_id=templates_dataset.id,
    config={
        "context_config": {
            "field_mapping": {"category": "category", "template": "response"},
            "context_name": "templates"
        }
    }
)

# Product info context
products_node = client.create_workflow_node(
    version_id=version.id,
    name="Product Info",
    entity_type="dataset",
    entity_id=products_dataset.id,
    config={
        "context_config": {
            "field_mapping": {"name": "product", "details": "info"},
            "context_name": "products"
        }
    }
)

# Generate response
response_node = client.create_workflow_node(
    version_id=version.id,
    name="Draft Response",
    entity_type="model",
    entity_id=llm_model.id,
    config={
        "input_template": """
Draft a customer support response.
Customer message: {{input}}
Category: {{""" + classify_node.id + """}}
Sentiment: {{""" + sentiment_node.id + """}}
Reference templates for this category:
{{#each templates}}
{{category}}: {{template}}
{{/each}}
Product information:
{{#each products}}
{{name}}: {{details}}
{{/each}}
Write a helpful, empathetic response appropriate for the sentiment detected.
"""
    }
)

# Data edges
client.create_workflow_edge(version_id=version.id, begin_node_id=classify_node.id, end_node_id=sentiment_node.id, edge_type="data")
client.create_workflow_edge(version_id=version.id, begin_node_id=sentiment_node.id, end_node_id=response_node.id, edge_type="data")

# Context edges
client.create_workflow_edge(version_id=version.id, begin_node_id=templates_node.id, end_node_id=response_node.id, edge_type="context")
client.create_workflow_edge(version_id=version.id, begin_node_id=products_node.id, end_node_id=response_node.id, edge_type="context")
```
## Data Enrichment

### Enrich Dataset with AI

Process an entire dataset through AI models.
```python
# Create batch workflow
workflow = client.create_workflow(
    name="Dataset Enrichment",
    description="Add AI-generated fields to dataset"
)
version = workflow.versions[0]

# Input dataset node
input_node = client.create_workflow_node(
    version_id=version.id,
    name="Source Data",
    entity_type="dataset",
    entity_id=source_dataset.id,
    config={
        "dataset_input": {
            "dataset_version_id": source_version.id,
            "input_field": "description",
            "operation": "iterate"
        }
    }
)

# Classification
classify_node = client.create_workflow_node(
    version_id=version.id,
    name="Classify",
    entity_type="model",
    entity_id=classifier.id,
    config={"input_template": "{{input}}"}
)

# Sentiment
sentiment_node = client.create_workflow_node(
    version_id=version.id,
    name="Sentiment",
    entity_type="model",
    entity_id=sentiment_model.id,
    config={"input_template": "{{input}}"}
)

# Summary
summary_node = client.create_workflow_node(
    version_id=version.id,
    name="Summarize",
    entity_type="model",
    entity_id=llm_model.id,
    config={
        "input_template": "Summarize in one sentence: {{input}}"
    }
)

# Output with all enrichments
output_node = client.create_workflow_node(
    version_id=version.id,
    name="Store Enriched",
    entity_type="dataset",
    entity_id=output_dataset.id,
    config={
        "output_dataset_id": output_dataset.id,
        "column_mapping": {
            "original": "{{input}}",
            "category": "{{" + classify_node.id + "}}",
            "sentiment": "{{" + sentiment_node.id + "}}",
            "summary": "{{" + summary_node.id + "}}"
        }
    }
)

# Connect all
# ... edges ...

# Execute in batch mode
execution = client.execute_workflow(
    workflow_id=workflow.id,
    input_mode="batch",
    batch_config={
        "dataset_id": source_dataset.id,
        "dataset_version_id": source_version.id,
        "input_field": "description",
        "parallelism": 10
    }
)
```
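The `parallelism: 10` setting caps how many dataset rows are processed concurrently. The effect is similar to fanning rows out over a bounded worker pool; a local sketch with a stand-in enrichment function (not how the platform schedules work internally):

```python
from concurrent.futures import ThreadPoolExecutor

def enrich(description: str) -> dict:
    """Stand-in for one row passing through the classify/sentiment/summary nodes."""
    return {"original": description, "summary": description[:40]}

rows = [f"Product description {i}" for i in range(25)]

# At most 10 rows in flight at once; results come back in input order
with ThreadPoolExecutor(max_workers=10) as pool:
    enriched = list(pool.map(enrich, rows))
```

Raising parallelism speeds up the batch but multiplies concurrent load on the underlying models, so tune it against your deployment's rate limits.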
## More Patterns

| Pattern | Description | Key Nodes |
|---|---|---|
| Translation Pipeline | Detect language → Translate → Verify | LangDetect, Translate, QA |
| Content Moderation | Classify → Flag → Review | Classifier, Rules, Output |
| Lead Scoring | NER → Sentiment → Score | NER, Sentiment, LLM |
| Document Comparison | OCR both → LLM compare | OCR x2, LLM |
| Audit Trail | Process → Log → Archive | Model, Logger, Storage |
## Template Library

Looking for ready-to-use templates? Check the SeeMe.ai template gallery for one-click workflow imports.