This repo contains a Python client SDK for use with the Durable Task Framework for Go and Dapr Workflow. With this SDK, you can define, schedule, and manage durable orchestrations using ordinary Python code.
Note that this project is not currently affiliated with the Durable Functions project for Azure Functions. If you are looking for a Python SDK for Durable Functions, please see this repo.
The following orchestration patterns are currently supported.
An orchestration can chain a sequence of function calls using the following syntax:
# simple activity function that returns a greeting
def hello(ctx: task.ActivityContext, name: str) -> str:
    return f'Hello {name}!'

# orchestrator function that sequences the activity calls
def sequence(ctx: task.OrchestrationContext, _):
    result1 = yield ctx.call_activity(hello, input='Tokyo')
    result2 = yield ctx.call_activity(hello, input='Seattle')
    result3 = yield ctx.call_activity(hello, input='London')

    return [result1, result2, result3]
You can find the full sample here.
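For context, here is a rough sketch of how a sample like this can be registered with a worker and scheduled with a client against a local sidecar. The exact setup lives in the full sample; treat the details below (default endpoint, the 30-second timeout) as illustrative assumptions.

from durabletask import client, task, worker

# `hello` and `sequence` are the functions from the snippet above
with worker.TaskHubGrpcWorker() as w:
    w.add_orchestrator(sequence)
    w.add_activity(hello)
    w.start()  # begin listening for work items from the sidecar

    # schedule a new orchestration instance and block until it completes
    c = client.TaskHubGrpcClient()
    instance_id = c.schedule_new_orchestration(sequence)
    state = c.wait_for_orchestration_completion(instance_id, timeout=30)
    print(state.serialized_output if state else 'orchestration not found')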
An orchestration can fan-out a dynamic number of function calls in parallel and then fan-in the results using the following syntax:
# activity function for getting the list of work items
def get_work_items(ctx: task.ActivityContext, _) -> List[str]:
    # ...

# activity function for processing a single work item
def process_work_item(ctx: task.ActivityContext, item: str) -> int:
    # ...

# orchestrator function that fans-out the work items and then fans-in the results
def orchestrator(ctx: task.OrchestrationContext, _):
    # the number of work items is unknown in advance
    work_items = yield ctx.call_activity(get_work_items)

    # fan-out: schedule the work items in parallel and wait for all of them to complete
    tasks = [ctx.call_activity(process_work_item, input=item) for item in work_items]
    results = yield task.when_all(tasks)

    # fan-in: summarize and return the results
    return {'work_items': work_items, 'results': results, 'total': sum(results)}
You can find the full sample here.
An orchestration can wait for a user-defined event, such as a human approval event, before proceeding to the next step. In addition, the orchestration can create a timer with an arbitrary duration that triggers some alternate action if the external event hasn't been received in time:
def purchase_order_workflow(ctx: task.OrchestrationContext, order: Order):
    """Orchestrator function that represents a purchase order workflow"""
    # Orders under $1000 are auto-approved
    if order.Cost < 1000:
        return "Auto-approved"

    # Orders of $1000 or more require manager approval
    yield ctx.call_activity(send_approval_request, input=order)

    # Approvals must be received within 24 hours or they will be canceled.
    approval_event = ctx.wait_for_external_event("approval_received")
    timeout_event = ctx.create_timer(timedelta(hours=24))
    winner = yield task.when_any([approval_event, timeout_event])
    if winner == timeout_event:
        return "Canceled"

    # The order was approved
    yield ctx.call_activity(place_order, input=order)
    approval_details = approval_event.get_result()
    return f"Approved by '{approval_details.approver}'"
As an aside, you'll also notice that the example orchestration above works with custom business objects. Custom classes, custom data classes, and named tuples are all supported, and serialization and deserialization of these objects is handled automatically by the SDK.
You can find the full sample here.
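For illustration, here is a minimal sketch of passing a custom data class as an orchestration input. The Order fields shown are assumptions made for this example; the full sample defines its own shape.

from dataclasses import dataclass

from durabletask import client

@dataclass
class Order:
    """Illustrative business object; the field names are assumptions for this sketch."""
    Cost: float
    Product: str
    Quantity: int

# the SDK serializes and deserializes the dataclass automatically
c = client.TaskHubGrpcClient()
instance_id = c.schedule_new_orchestration(
    purchase_order_workflow,
    input=Order(Cost=2000, Product='Laptop', Quantity=1))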
The following features are currently supported:
Orchestrations are implemented using ordinary Python functions that take an OrchestrationContext as their first parameter. The OrchestrationContext provides APIs for starting child orchestrations, scheduling activities, and waiting for external events, among other things. Orchestrations are fault-tolerant and durable, meaning that they can automatically recover from failures and rebuild their local execution state. Orchestrator functions must be deterministic, meaning that they must always produce the same output given the same input.
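As a quick illustration of the determinism requirement, values that can vary between replays should come from the context rather than from non-deterministic APIs. A minimal sketch, assuming the context exposes the orchestration's logical time as a current_utc_datetime property:

from durabletask import task

def deterministic_orchestrator(ctx: task.OrchestrationContext, _):
    # OK: this value is recorded in the orchestration history and replayed consistently
    started_at = ctx.current_utc_datetime

    # Avoid: datetime.utcnow(), random.random(), uuid.uuid4(), etc. produce a
    # different value on every replay and break determinism.

    result = yield ctx.call_activity(hello, input='Tokyo')  # `hello` from the earlier sample
    return {'started_at': started_at.isoformat(), 'result': result}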
Activities are implemented using ordinary Python functions that take an ActivityContext as their first parameter. Activity functions are scheduled by orchestrations and have at-least-once execution guarantees, meaning that they will be executed at least once but may be executed multiple times in the event of a transient failure. Activity functions are where the real "work" of any orchestration is done.
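Because of the at-least-once guarantee, activities with side effects should be written to be idempotent. A hypothetical sketch (the payments module and the orchestration_id property used below are assumptions for this example):

from durabletask import task

import payments  # hypothetical payment client, used purely for illustration

def charge_customer(ctx: task.ActivityContext, order_id: str) -> str:
    # Derive an idempotency key from stable identifiers so that a retried
    # execution of this activity doesn't charge the customer twice.
    idempotency_key = f'{ctx.orchestration_id}:{order_id}'
    return payments.charge(order_id, idempotency_key=idempotency_key)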
Orchestrations can schedule durable timers using the create_timer API. These timers are durable, meaning that they will survive orchestrator restarts and will fire even if the orchestrator is not actively in memory. Durable timers can be of any duration, from milliseconds to months.
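For example, a hypothetical orchestrator that sends a reminder once a day for a week might look like the following sketch (send_reminder is an assumed activity defined only for this example):

from datetime import timedelta

from durabletask import task

def send_reminder(ctx: task.ActivityContext, _=None) -> None:
    # hypothetical activity that delivers the reminder (e.g., via email)
    ...

def weekly_reminders(ctx: task.OrchestrationContext, _):
    for _ in range(7):
        yield ctx.call_activity(send_reminder)
        # durable sleep: survives restarts and doesn't hold the orchestrator in memory
        yield ctx.create_timer(timedelta(hours=24))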
Orchestrations can start child orchestrations using the call_sub_orchestrator API. Child orchestrations are useful for encapsulating complex logic and for breaking up large orchestrations into smaller, more manageable pieces.
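A hypothetical sketch of a parent orchestration delegating work to child orchestrations, reusing the fan-out/fan-in activities shown earlier (the region names and the process_region logic are assumptions for this example):

from durabletask import task

def process_region(ctx: task.OrchestrationContext, region: str):
    # child orchestration: processes the work items for a single region
    work_items = yield ctx.call_activity(get_work_items)
    results = yield task.when_all(
        [ctx.call_activity(process_work_item, input=item) for item in work_items])
    return sum(results)

def process_all_regions(ctx: task.OrchestrationContext, _):
    # parent orchestration: one child orchestration per region, run in parallel
    totals = yield task.when_all(
        [ctx.call_sub_orchestrator(process_region, input=region)
         for region in ['east', 'west', 'north']])
    return sum(totals)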
Orchestrations can wait for external events using the wait_for_external_event API. External events are useful for implementing human interaction patterns, such as waiting for a user to approve an order before continuing.
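The sending side of an external event is the client API. A minimal sketch of approving the purchase_order_workflow shown earlier (the payload shape and the placeholder instance ID are assumptions for this example):

from durabletask import client

c = client.TaskHubGrpcClient()
instance_id = '...'  # the ID returned when the orchestration was scheduled

# the event name must match the one passed to wait_for_external_event;
# the payload is what the awaited event task yields on the orchestrator side
c.raise_orchestration_event(instance_id, 'approval_received',
                            data={'approver': 'jane.doe'})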
Orchestrations can be continued as new using the continue_as_new API. This API allows an orchestration to restart itself from scratch, optionally with a new input.
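A hypothetical sketch of an "eternal" orchestration that uses continue_as_new to keep its history small (delete_expired_records is an assumed activity defined only for this example):

from datetime import timedelta

from durabletask import task

def delete_expired_records(ctx: task.ActivityContext, _=None) -> None:
    # hypothetical cleanup activity
    ...

def periodic_cleanup(ctx: task.OrchestrationContext, run_count: int):
    yield ctx.call_activity(delete_expired_records)
    yield ctx.create_timer(timedelta(hours=1))

    # restart the orchestration with a fresh history and a new input
    ctx.continue_as_new(run_count + 1)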
Orchestrations can be suspended using the suspend_orchestration client API and will remain suspended until resumed using the resume_orchestration client API. A suspended orchestration will stop processing new events, but will continue to buffer any that happen to arrive until resumed, ensuring that no data is lost. An orchestration can also be terminated using the terminate_orchestration client API. Terminated orchestrations will stop processing new events and will discard any buffered events.
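A minimal sketch of driving these client APIs (the placeholder instance ID below stands in for whatever was returned when the orchestration was scheduled):

from durabletask import client

c = client.TaskHubGrpcClient()
instance_id = '...'  # the ID returned when the orchestration was scheduled

# pause the instance; any events that arrive in the meantime are buffered
c.suspend_orchestration(instance_id)

# ...later, resume processing of the buffered events
c.resume_orchestration(instance_id)

# or stop the instance entirely, discarding anything that was buffered
c.terminate_orchestration(instance_id, output='canceled by an operator')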
Orchestrations can specify retry policies for activities and sub-orchestrations. These policies control how many times and how frequently an activity or sub-orchestration will be retried in the event of a transient error.
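A hedged sketch of what a retry policy might look like; the RetryPolicy constructor arguments and the retry_policy parameter name below are assumptions based on the SDK's samples, so check the source for the exact signatures:

from datetime import timedelta

from durabletask import task

# assumed policy shape: up to 5 attempts with exponential backoff
retry_policy = task.RetryPolicy(
    first_retry_interval=timedelta(seconds=5),
    max_number_of_attempts=5,
    backoff_coefficient=2.0,
    max_retry_interval=timedelta(minutes=1),
    retry_timeout=timedelta(minutes=10))

def resilient_orchestrator(ctx: task.OrchestrationContext, _):
    # the same policy can be passed to call_sub_orchestrator as well
    result = yield ctx.call_activity(process_work_item, input='item-1',
                                     retry_policy=retry_policy)
    return result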
- Python 3.8
- A Durable Task-compatible sidecar, like Dapr Workflow
Installation is currently only supported from source. Before installing, make sure pip, setuptools, and wheel are up to date:
python3 -m pip install --upgrade pip setuptools wheel
To install this package from source, clone this repository and run the following command from the project root:
python3 -m pip install .
See the examples directory for a list of sample orchestrations and instructions on how to run them.
The following sections contain more information about how to develop this project. Note that development commands require make to be installed on your local machine. If you're using Windows, you can install make using Chocolatey or use WSL.
Protobuf definitions are stored in the ./submodules/durabletask-proto directory, which is a git submodule. To initialize and update the submodule, run the following command from the project root:
git submodule update --init
Once the submodule is available, the corresponding source code can be regenerated using the following command from the project root:
make gen-proto
Unit tests can be run using the following command from the project root. Unit tests don't require a sidecar process to be running.
make test-unit
The E2E (end-to-end) tests require a sidecar process to be running. You can use the Dapr sidecar for this or run a Durable Task test sidecar using the following docker command:
docker run --name durabletask-sidecar -p 4001:4001 --env 'DURABLETASK_SIDECAR_LOGLEVEL=Debug' --rm cgillum/durabletask-sidecar:latest start --backend Emulator
To run the E2E tests, run the following command from the project root:
make test-e2e
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.