This guide walks you through creating a complete workflow from scratch.
## What We’ll Build

A document processing pipeline that:

1. Extracts text from images (OCR)
2. Identifies named entities (NER)
3. Summarizes the content (LLM)
4. Stores results in a dataset
```mermaid
graph LR
    A[Image Input] --> B[OCR]
    B --> C[NER]
    C --> D[LLM Summary]
    D --> E[Output Dataset]
```
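Conceptually, the pipeline is a straightforward composition of three stages followed by a store step. The sketch below is purely illustrative — the stub functions stand in for the real OCR, NER, and LLM models, and the returned record mirrors what the output dataset will hold:

```python
# Illustrative stubs standing in for real model calls
def ocr(image_path: str) -> str:
    return f"text extracted from {image_path}"

def ner(text: str) -> list[str]:
    return ["Acme Corp", "2024-01-15"]  # placeholder entities

def summarize(text: str, entities: list[str]) -> str:
    return f"Summary covering {len(entities)} entities"

def run_pipeline(image_path: str) -> dict:
    text = ocr(image_path)
    entities = ner(text)
    summary = summarize(text, entities)
    # The final record is what the output dataset node stores
    return {"text": text, "entities": entities, "summary": summary}

result = run_pipeline("./test-document.png")
```

Each step in this guide builds one piece of that flow on the platform.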
## Step 1: Create the Workflow

### Using the Web Platform

1. Navigate to **Workflows** in the sidebar
2. Click **New Workflow**
3. Enter details:
   - **Name**: “Document Processing Pipeline”
   - **Description**: “Extract, analyze, and summarize documents”
4. Click **Create**

You’ll see the workflow canvas with a blank version ready to edit.
### Using the Python SDK

```python
from seeme import Client

client = Client()

# Create the workflow
workflow = client.create_workflow(
    name="Document Processing Pipeline",
    description="Extract, analyze, and summarize documents",
)
print(f"Workflow created: {workflow.id}")

# A version is created automatically, or create one explicitly
version = client.create_workflow_version(workflow_id=workflow.id, name="v1")
print(f"Version created: {version.id}")
```
Using the REST API
# Create the workflowcurl -X POST "https://api.seeme.ai/api/v1/workflows"\
-H "Authorization: myusername:my-api-key"\
-H "Content-Type: application/json"\
-d '{
"name": "Document Processing Pipeline",
"description": "Extract, analyze, and summarize documents"
}'# Create a workflow versioncurl -X POST "https://api.seeme.ai/api/v1/workflows/{workflow_id}/versions"\
-H "Authorization: myusername:my-api-key"\
-H "Content-Type: application/json"\
-d '{"name": "v1"}'
## Step 2: Add Model Nodes

### Add OCR Node

1. From the node palette, drag **Model** onto the canvas
2. Click the node to configure:
   - **Name**: “Extract Text (OCR)”
   - **Model**: Select your OCR model
   - **Input Template**: `{{input}}`
3. Position it as the first step

```python
# Assume you have an OCR model
ocr_model = client.get_model("your-ocr-model-id")

ocr_node = client.create_workflow_node(
    version_id=version.id,
    name="Extract Text (OCR)",
    entity_type="model",
    entity_id=ocr_model.id,
    config={"input_template": "{{input}}"},
    position={"x": 100, "y": 100},
)

# Add the NER node the same way; later steps reference it
ner_model = client.get_model("your-ner-model-id")

ner_node = client.create_workflow_node(
    version_id=version.id,
    name="Identify Entities (NER)",
    entity_type="model",
    entity_id=ner_model.id,
    config={"input_template": f"{{{{{ocr_node.id}}}}}"},
    position={"x": 300, "y": 100},
)
```
### Add LLM Node

Give the summary node an input template that references the upstream OCR and NER outputs:

```text
Summarize this document:

Text: {{ocr_node}}
Entities found: {{ner_node}}

Provide a 2-3 sentence summary.
```

```python
llm_model = client.get_model("your-llm-model-id")

llm_node = client.create_workflow_node(
    version_id=version.id,
    name="Generate Summary",
    entity_type="model",
    entity_id=llm_model.id,
    config={
        "input_template": f"""
Summarize this document:

Text: {{{{{ocr_node.id}}}}}
Entities found: {{{{{ner_node.id}}}}}

Provide a 2-3 sentence summary.
"""
    },
    position={"x": 500, "y": 100},
)
```
## Step 3: Add Output Dataset

1. Drag a **Dataset** node onto the canvas
2. Configure:
   - **Name**: “Store Results”
   - **Dataset**: Select or create an output dataset
   - **Output Configuration**:
     - Map `text` → OCR output
     - Map `entities` → NER output
     - Map `summary` → LLM output

```python
# Create or get the output dataset
output_dataset = client.create_dataset(
    name="Document Analysis Results",
    content_type="text",
)

output_node = client.create_workflow_node(
    version_id=version.id,
    name="Store Results",
    entity_type="dataset",
    entity_id=output_dataset.id,
    config={
        "output_dataset_id": output_dataset.id,
        "output_version_id": output_dataset.versions[0].id,
        "column_mapping": {
            "text": f"{{{{{ocr_node.id}}}}}",
            "entities": f"{{{{{ner_node.id}}}}}",
            "summary": f"{{{{{llm_node.id}}}}}",
        },
    },
    position={"x": 700, "y": 100},
)
```
## Step 4: Connect Nodes with Edges

### Draw Edges

1. Hover over a node to see connection points
2. Click and drag from the output port to the input port of the next node
3. Create edges:
   - OCR → NER (data edge)
   - NER → LLM (data edge)
   - LLM → Dataset (data edge)

The canvas should show a connected flow from left to right.
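If you are building the version from the SDK instead of the canvas, the same three edges need to be registered as source → target pairs. The sketch below uses placeholder node ids, and the commented-out call name `create_workflow_edge` is an assumption by analogy with `create_workflow_node` — check the SDK reference for the actual method:

```python
# Node ids from the earlier steps (placeholder values for illustration)
ocr_id, ner_id, llm_id, output_id = "ocr-node", "ner-node", "llm-node", "output-node"

# Source -> target pairs, matching the edges drawn on the canvas
edges = [
    (ocr_id, ner_id),
    (ner_id, llm_id),
    (llm_id, output_id),
]

# Each pair would then be registered against the version, e.g.:
# for source, target in edges:
#     client.create_workflow_edge(
#         version_id=version.id, source_node_id=source, target_node_id=target
#     )
```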
## Step 5: Test the Workflow

```python
import time

# Execute with a test file
execution = client.execute_workflow(
    workflow_id=workflow.id,
    version_id=version.id,
    input_mode="single",
    item="./test-document.png",
)

# Monitor progress
while execution.status in ["pending", "running"]:
    time.sleep(2)
    execution = client.get_workflow_execution(
        workflow_id=workflow.id,
        execution_id=execution.id,
    )
    print(f"Status: {execution.status}")
    if execution.progress:
        print(f"  Node: {execution.progress.current_node_id}")

# View results
print("\n=== Results ===")
print(execution.results)
```
```bash
# Execute workflow with a file
curl -X POST "https://api.seeme.ai/api/v1/workflows/{workflow_id}/execute" \
  -H "Authorization: myusername:my-api-key" \
  -F "file=@./test-document.png" \
  -F "version_id={version_id}" \
  -F "input_mode=single"

# Get execution status
curl -X GET "https://api.seeme.ai/api/v1/workflows/{workflow_id}/executions/{execution_id}" \
  -H "Authorization: myusername:my-api-key"

# Get execution results
curl -X GET "https://api.seeme.ai/api/v1/workflows/{workflow_id}/executions/{execution_id}/results" \
  -H "Authorization: myusername:my-api-key"
```
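The fixed two-second polling loop above is fine for short runs; for longer executions you may want a small helper with a timeout. This is a generic sketch, not part of the SDK — the fetch function is injected, so you can pass a lambda around `client.get_workflow_execution`:

```python
import time

def poll_until_done(fetch, interval: float = 2.0, timeout: float = 300.0):
    """Call fetch() until the returned execution reaches a terminal status."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        execution = fetch()
        if execution.status not in ("pending", "running"):
            return execution
        time.sleep(interval)
    raise TimeoutError("workflow execution did not finish in time")
```

You would call it as `poll_until_done(lambda: client.get_workflow_execution(workflow_id=workflow.id, execution_id=execution.id))`.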
## Step 6: Activate the Version

Once tested, activate the version for production use:

1. Click the **Activate** button
2. Confirm activation
3. This version is now the default for executions

```python
# Activate the tested version
client.activate_workflow_version(workflow_id=workflow.id, version_id=version.id)
print(f"Version {version.name} is now active")
```
```bash
# Activate the workflow version
curl -X POST "https://api.seeme.ai/api/v1/workflows/{workflow_id}/versions/{version_id}/activate" \
  -H "Authorization: myusername:my-api-key"

# List workflow versions
curl -X GET "https://api.seeme.ai/api/v1/workflows/{workflow_id}/versions" \
  -H "Authorization: myusername:my-api-key"
```