## What We’ll Build

A research agent using LangGraph (Python) that:

- Receives a topic as input
- Runs a **researcher** node to gather key facts
- Runs a **reporter** node to expand the findings into a markdown report
## Prerequisites

- Python 3.11+
- LangGraph and the OpenAI integration installed (`pip install langgraph langchain-openai`)
- Crewship CLI installed and authenticated

Want to skip the setup? Clone the langgraph-quickstart repo and run `crewship deploy` to get a working agent deployed in minutes.
## Step 1: Create the Project

```shell
mkdir research-agent && cd research-agent
mkdir -p src/research_agent
touch src/research_agent/__init__.py
```
Your project structure:

```
research-agent/
├── src/
│   └── research_agent/
│       ├── __init__.py
│       └── graph.py
├── langgraph.json
├── pyproject.toml
└── crewship.toml
```
## Step 2: Define Your Graph

Create `src/research_agent/graph.py`:
```python
from typing import TypedDict

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph


class State(TypedDict):
    topic: str
    research: str
    report: str


llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)


def researcher(state: State) -> dict:
    """Research a topic and produce bullet-point notes."""
    response = llm.invoke([
        SystemMessage(content=(
            "You are a senior researcher. Given a topic, produce 10 concise bullet "
            "points covering the most important facts, recent developments, and key "
            "insights. Return only the bullet list."
        )),
        HumanMessage(content=f"Topic: {state['topic']}"),
    ])
    return {"research": response.content}


def reporter(state: State) -> dict:
    """Expand research notes into a polished markdown report."""
    response = llm.invoke([
        SystemMessage(content=(
            "You are a senior reporting analyst. Given research bullet points, "
            "expand them into a well-structured markdown report with an introduction, "
            "detailed sections, and a conclusion."
        )),
        HumanMessage(content=f"Research notes:\n{state['research']}"),
    ])
    return {"report": response.content}


builder = StateGraph(State)
builder.add_node("researcher", researcher)
builder.add_node("reporter", reporter)
builder.set_entry_point("researcher")
builder.add_edge("researcher", "reporter")
builder.set_finish_point("reporter")

graph = builder.compile()
```
The compiled `graph` object is what Crewship invokes. Your input (e.g., `{"topic": "quantum computing"}`) becomes the initial state.
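Before deploying, it helps to understand how LangGraph updates state: each node returns a partial dict that is merged into the shared state, so `researcher` fills in `research` and `reporter` fills in `report`. A minimal pure-Python sketch of that merge semantics (stand-in functions instead of LLM calls, not Crewship or LangGraph code) looks like:

```python
# Sketch of LangGraph's state-merge semantics using plain dicts.
# Each "node" returns a partial update that is merged into the shared state.

def researcher(state: dict) -> dict:
    # Stand-in for the LLM call: produce research notes from the topic.
    return {"research": f"notes on {state['topic']}"}

def reporter(state: dict) -> dict:
    # Stand-in for the LLM call: expand notes into a report.
    return {"report": f"# Report\n\n{state['research']}"}

def run(topic: str) -> dict:
    state = {"topic": topic}
    for node in (researcher, reporter):  # researcher -> reporter, as in the graph
        state.update(node(state))        # merge the node's partial update
    return state

final = run("quantum computing")
print(final["report"])
```

The real graph does the same thing, except the edges you declared decide the node order and the `State` TypedDict declares which keys exist.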
## Step 3: Add langgraph.json

Create `langgraph.json` in the project root; this lets `crewship init` auto-detect the framework:

```json
{
  "graphs": {
    "agent": "./src/research_agent/graph.py:graph"
  }
}
```
## Step 4: Add Dependencies

Create `pyproject.toml`:

```toml
[project]
name = "research-agent"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "langgraph>=0.2.0",
    "langchain-openai>=0.2.0",
]
```
## Step 5: Add Crewship Configuration

Run `crewship init` to auto-generate the config, or create `crewship.toml` manually:

```toml
[deployment]
framework = "langgraph"
entrypoint = "src.research_agent.graph:graph"
python = "3.11"

[build]
exclude = ["tests"]

[runtime]
timeout = 300
memory = 512
```

The `entrypoint` uses Python module-path format: dots separate packages and modules, and a colon precedes the variable name.
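The `module:variable` convention is the same one used by tools like Gunicorn and Uvicorn. To make the format concrete, here is a hypothetical resolver (illustrative only, not Crewship's actual code) showing how such a spec maps to an import plus an attribute lookup:

```python
import importlib

def resolve_entrypoint(spec: str):
    """Load the object named by a 'module.path:variable' spec (illustrative)."""
    module_path, _, attr = spec.partition(":")   # "pkg.mod:obj" -> ("pkg.mod", "obj")
    module = importlib.import_module(module_path)
    return getattr(module, attr)

# e.g. resolve_entrypoint("src.research_agent.graph:graph") would return
# the compiled graph, assuming the project root is on sys.path.
```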
## Step 6: Set Environment Variables

```shell
crewship env set OPENAI_API_KEY=sk-proj-...
```
## Step 7: Deploy

```shell
crewship deploy
```

```
📦 Packaging agent...
☁️ Uploading build context...
🔨 Building image...
✅ Deployed successfully!

Deployment: dep_abc123xyz
Project: research-agent
```
## Step 8: Run Your Agent

```shell
crewship invoke --input '{"topic": "quantum computing"}' --stream
```
Watch the execution:

```
▶ Run started: run_xyz789
├─ [10:30:01] Starting graph execution
├─ [10:30:02] Node: researcher starting...
├─ [10:30:12] Node: researcher completed
├─ [10:30:13] Node: reporter starting...
├─ [10:30:45] Node: reporter completed
✅ Run completed in 44.8s
```
## Step 9: Access the Output

The run result is the final graph state, a dictionary containing every state key:

```shell
# Get the run result
curl https://api.crewship.dev/v1/runs/run_xyz789 \
  -H "Authorization: Bearer YOUR_API_KEY"
```
```json
{
  "status": "succeeded",
  "result": {
    "topic": "quantum computing",
    "research": "• Quantum computers use qubits...",
    "report": "# Quantum Computing\n\n## Introduction\n..."
  }
}
```
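Because the result is plain JSON, pulling the report out in a script is a one-liner with the standard library. A small sketch, assuming the response shape shown above (the `body` string here is just the sample payload inlined for illustration):

```python
import json

# Sample response body, matching the shape returned by the runs endpoint.
body = """
{
  "status": "succeeded",
  "result": {
    "topic": "quantum computing",
    "research": "• Quantum computers use qubits...",
    "report": "# Quantum Computing\\n\\n## Introduction\\n..."
  }
}
"""

run = json.loads(body)
if run["status"] == "succeeded":
    # Write the markdown report to disk.
    with open("report.md", "w") as f:
        f.write(run["result"]["report"])
```

In practice you would fetch `body` with an HTTP client instead of hard-coding it, and check for a failed status before reading `result`.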
To save the report as an artifact, write it to the `artifacts/` directory inside your node:

```python
def reporter(state: State) -> dict:
    response = llm.invoke([...])
    with open("artifacts/report.md", "w") as f:
        f.write(response.content)
    return {"report": response.content}
```
## Next Steps