Commit f28dc53

docs(doubleword): add main_async() sample for 1h flex tier

Mirror the existing main_batch() block with main_async(), using autobatcher.AsyncOpenAI (1h flex tier) instead of BatchOpenAI (24h). Updates the docstring header to list all three execution modes.

1 parent 4a1aa33
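The commit message describes three entry points that differ only in which pricing tier they target. As an illustrative sketch (not part of the commit), a caller could dispatch between them like this; `main`, `main_async`, and `main_batch` are stubbed out here, since the real functions live in the sample file and require agent-framework-openai, and the `TIERS`/`run` helpers are hypothetical:

```python
import asyncio

# Stubs standing in for the sample's entry points; the real functions
# live in doubleword_chat_client.py and call the Doubleword API.
async def main() -> str:
    return "realtime (priority tier)"

async def main_async() -> str:
    return "1-hour async (flex tier)"

async def main_batch() -> str:
    return "24-hour batch (deepest discount)"

# Hypothetical dispatcher: the TIERS mapping and run() helper are
# illustrative, not part of the committed sample.
TIERS = {"realtime": main, "flex": main_async, "batch": main_batch}

def run(tier: str) -> str:
    # Look up the coroutine for the requested tier and run it to completion.
    return asyncio.run(TIERS[tier]())
```

A real caller would pick the tier from configuration (for example an environment variable) rather than hard-coding it.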

1 file changed: python/samples/02-agents/chat_client/doubleword_chat_client.py (33 additions & 0 deletions)
@@ -11,6 +11,11 @@
 Since Doubleword exposes an OpenAI-compatible API, you can use the built-in
 OpenAIChatCompletionClient with a custom base URL.
 
+Three execution modes are demonstrated:
+- main() — realtime (priority tier)
+- main_async() — 1-hour async (flex tier, mid-tier cost)
+- main_batch() — 24-hour batch (deepest discount)
+
 Setup:
     pip install agent-framework-openai
     export DOUBLEWORD_API_KEY="your-api-key"
@@ -40,9 +45,37 @@ async def main() -> None:
     print(f"Assistant: {response}")
 
 
+async def main_async() -> None:
+    """Run requests on the 1-hour async (flex) tier using autobatcher.
+
+    Mid-tier cost between realtime and 24-hour batch — use when next-day
+    batch turnaround is too slow but realtime is too expensive.
+
+    Install: pip install autobatcher
+    See: https://pypi.org/project/autobatcher/
+    """
+    from autobatcher import AsyncOpenAI
+
+    client = OpenAIChatCompletionClient(
+        model="Qwen/Qwen3.5-397B-A17B-FP8",
+        async_client=AsyncOpenAI(
+            api_key=os.environ["DOUBLEWORD_API_KEY"],
+            base_url="https://api.doubleword.ai/v1",
+        ),
+    )
+
+    message = Message("user", contents=["Explain the benefits of an AI model gateway in one paragraph."])
+    print(f"User: {message.text}")
+
+    response = await client.get_response([message], stream=False)
+    print(f"Assistant: {response}")
+
+
 async def main_batch() -> None:
     """Run batch requests at reduced cost using autobatcher.
 
+    24-hour batch tier — deepest discount (up to ~90% off realtime).
+
     Install: pip install autobatcher
     See: https://pypi.org/project/autobatcher/
     """
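The docstrings describe costs only relatively ("mid-tier", "up to ~90% off realtime"). A small sketch of how such discounts compose, where the $1.00-per-1M-token baseline and the 50% flex discount are invented for illustration and only the ~90% batch figure comes from the diff above:

```python
# Baseline price and flex discount are assumptions for illustration;
# the ~90% batch discount is quoted in main_batch()'s docstring.
REALTIME_PER_MTOK = 1.00   # assumed baseline, $ per 1M tokens
FLEX_DISCOUNT = 0.50       # assumption: mid-tier sits between the other two
BATCH_DISCOUNT = 0.90      # "up to ~90% off realtime" (from the diff)

def tier_price(base: float, discount: float) -> float:
    """Discounted per-1M-token price for a tier."""
    return round(base * (1.0 - discount), 4)

flex_price = tier_price(REALTIME_PER_MTOK, FLEX_DISCOUNT)
batch_price = tier_price(REALTIME_PER_MTOK, BATCH_DISCOUNT)
```

Under these assumptions, batch is cheapest, flex sits in between, and realtime is the ceiling, which matches the ordering the docstrings imply.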
