The guarded RAG assistant is an easily customizable recipe for building a RAG-powered chatbot.
In addition to creating a hosted, shareable user interface, the guarded RAG assistant provides:
- Business logic and LLM-based guardrails.
- A predictive secondary model that evaluates response quality.
- GenAI-focused custom metrics.
- DataRobot MLOps hosting, monitoring, and governing the individual backend deployments.
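Conceptually, a business-logic guardrail is a predicate applied to a prompt or response before it reaches the LLM or the user. The sketch below is illustrative only — the blocklist, function name, and logic are assumptions, not the template's actual guardrail configuration (which lives in the deployed backend):

```python
# Illustrative sketch of a business-logic guardrail: reject prompts that
# match a blocklist before they ever reach the LLM. Names and topics here
# are hypothetical, not taken from the template.
BLOCKED_TOPICS = {"password", "social security number"}

def passes_guardrail(prompt: str) -> bool:
    """Return False if the prompt touches any blocked topic."""
    lowered = prompt.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)
```

LLM-based guardrails replace the substring test with a classification call to a moderation model, but the control flow is the same: evaluate, then allow or block.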
**Warning**

Application Templates are intended to be starting points that provide guidance on how to develop, serve, and maintain AI applications. They require a developer or data scientist to adapt and modify them to meet business requirements before being put into production.
1. If `pulumi` is not already installed, install the CLI following the instructions here. After installing for the first time, restart your terminal and run:

   ```bash
   pulumi login --local  # omit --local to use Pulumi Cloud (requires separate account)
   ```

2. Clone the template repository.

   ```bash
   git clone https://github.com/datarobot-community/guarded-rag-assistant.git
   cd guarded-rag-assistant
   ```

3. Rename the file `.env.template` to `.env` in the root directory of the repo and populate your credentials.

   ```bash
   DATAROBOT_API_TOKEN=...
   DATAROBOT_ENDPOINT=...  # e.g. https://app.datarobot.com/api/v2
   OPENAI_API_KEY=...
   OPENAI_API_VERSION=...  # e.g. 2024-02-01
   OPENAI_API_BASE=...  # e.g. https://your_org.openai.azure.com/
   OPENAI_API_DEPLOYMENT_ID=...  # e.g. gpt-4
   PULUMI_CONFIG_PASSPHRASE=...  # required; choose an alphanumeric passphrase used to encrypt the Pulumi config
   ```

4. In a terminal, run:

   ```bash
   python quickstart.py YOUR_PROJECT_NAME  # Windows users may have to use `py` instead of `python`
   ```

   Advanced users who want control over virtual environment creation, dependency installation, environment variable setup, and `pulumi` invocation can find the manual steps here.
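Before running `quickstart.py`, you can sanity-check that your `.env` assigns every required key a value. A small stdlib-only sketch (the key list mirrors the template above; `missing_keys` is a hypothetical helper, not part of the repo):

```python
# Sketch: verify that .env-style text defines all keys the template expects.
# Assumes simple KEY=value lines, as in .env.template.
REQUIRED_KEYS = {
    "DATAROBOT_API_TOKEN", "DATAROBOT_ENDPOINT",
    "OPENAI_API_KEY", "OPENAI_API_VERSION",
    "OPENAI_API_BASE", "OPENAI_API_DEPLOYMENT_ID",
    "PULUMI_CONFIG_PASSPHRASE",
}

def missing_keys(env_text: str) -> set[str]:
    """Return the required keys that are not assigned a non-empty value."""
    defined = set()
    for line in env_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            if value.strip():
                defined.add(key.strip())
    return REQUIRED_KEYS - defined
```

Run it against the contents of your `.env`; an empty result means every required key has a value.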
App Templates transform your AI projects from notebooks to production-ready applications. Too often, getting models into production means rewriting code, juggling credentials, and coordinating with multiple tools & teams just to make simple changes. DataRobot's composable AI apps framework eliminates these bottlenecks, letting you spend more time experimenting with your ML and app logic and less time wrestling with plumbing and deployment.
- Start Building in Minutes: Deploy complete AI applications instantly, then customize AI logic or frontend independently - no architectural rewrites needed.
- Keep Working Your Way: Data scientists keep working in notebooks, developers in IDEs, and configs stay isolated - update any piece without breaking others.
- Iterate With Confidence: Make changes locally and deploy with confidence - spend less time writing and troubleshooting plumbing, more time improving your app.
Each template provides an end-to-end AI architecture, from raw inputs to deployed application, while remaining highly customizable for specific business requirements.
- Replace `assets/datarobot_english_documentation_docsassist.zip` with a new zip file containing `.pdf`, `.docx`, `.md`, or `.txt` documents (example alternative docs here).
- Update the `rag_documents` setting in `infra/settings_main.py` to specify the local path to the new zip file.
- Run `pulumi up` to update your stack.
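If your source documents live in a folder rather than a zip, a short stdlib sketch can package the accepted formats into a replacement archive (the function name is illustrative; the extension list comes from the step above):

```python
# Sketch: bundle a folder of documents into a zip that can stand in for the
# template's assets archive. Only the formats the template accepts are kept.
import zipfile
from pathlib import Path

ACCEPTED = {".pdf", ".docx", ".md", ".txt"}

def zip_documents(src_dir: str, out_zip: str) -> int:
    """Zip every accepted document under src_dir; return the file count."""
    count = 0
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(Path(src_dir).rglob("*")):
            if path.suffix.lower() in ACCEPTED:
                zf.write(path, path.relative_to(src_dir))
                count += 1
    return count
```

Point `rag_documents` at the resulting zip as described above.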
- Modify your `.env`:

  ```bash
  GOOGLE_SERVICE_ACCOUNT=''  # insert json service key between the single quotes, newlines are OK
  GOOGLE_REGION=...  # default is 'us-west1'
  ```

- Update your environment and install `google-auth`.

  ```bash
  source set_env.sh  # On Windows use `set_env.bat`
  pip install google-auth
  ```

- Update the credential type to be provisioned in `infra/settings_llm_credential.py`.

  ```python
  # credential = AzureOpenAICredentials()
  # credential.test()
  from docsassist.credentials import GoogleLLMCredentials

  credential = GoogleLLMCredentials()
  credential.test('gemini-1.5-flash-001')  # select a model for validating the credential
  ```

- Configure a Gemini blueprint to be provisioned in `infra/settings_rag.py`.

  ```python
  # llm_id=GlobalLLM.AZURE_OPENAI_GPT_3_5_TURBO,
  llm_id=GlobalLLM.GOOGLE_GEMINI_1_5_FLASH,
  ```

- Run `pulumi up` to update your stack.
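The `GOOGLE_SERVICE_ACCOUNT` value must be valid JSON before `credential.test()` can succeed. A quick stdlib check — the field names below are the keys a Google service-account file normally carries, and the function is a hypothetical helper, not part of the repo:

```python
# Sketch: confirm the GOOGLE_SERVICE_ACCOUNT value parses as JSON and carries
# the fields a Google service-account key file typically includes.
import json

def looks_like_service_account(raw: str) -> bool:
    """True if raw is JSON with the typical service-account fields."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return {"type", "project_id", "private_key", "client_email"} <= data.keys()
```

Running this on `os.environ["GOOGLE_SERVICE_ACCOUNT"]` catches a truncated paste before you spend a `pulumi up` cycle on it.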
- Edit `infra/settings_main.py` and update `application_type` to `ApplicationType.DIY`.
- Optionally, update `APP_LOCALE` in `docsassist/i18n.py` to toggle the language. Supported locales include Japanese (ja_JP) in addition to the English default (en_US).
- Run `pulumi up` to update your stack with the example custom Streamlit frontend.
- After provisioning the stack at least once, you can also edit and test the Streamlit frontend locally using `streamlit run app.py` from the `frontend/` directory (don't forget to initialize your environment using `set_env`).
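A locale toggle like `APP_LOCALE` amounts to selecting a message catalog by locale code. The sketch below only illustrates the idea — the catalogs, strings, and `translate` helper are hypothetical, not copied from `docsassist/i18n.py`:

```python
# Hypothetical sketch of locale-keyed message catalogs, illustrating what
# switching APP_LOCALE between en_US and ja_JP does. The real strings and
# mechanism live in docsassist/i18n.py.
MESSAGES = {
    "en_US": {"greeting": "Ask a question about the documents."},
    "ja_JP": {"greeting": "ドキュメントについて質問してください。"},
}

def translate(key: str, locale: str = "en_US") -> str:
    """Look up key in the locale's catalog, falling back to en_US."""
    catalog = MESSAGES.get(locale, MESSAGES["en_US"])
    return catalog.get(key, MESSAGES["en_US"][key])
```

An unknown locale falls back to the English default rather than raising, which keeps the frontend usable if the setting is mistyped.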
- Install additional requirements (e.g. FAISS, HuggingFace).

  ```bash
  source set_env.sh  # On Windows use `set_env.bat`
  pip install -r requirements-extra.txt
  ```

- Edit `infra/settings_main.py` and update `rag_type` to `RAGType.DIY`.
- Run `pulumi up` to update your stack with the example custom RAG logic.
- Edit `notebooks/build_rag.ipynb` to customize the doc chunking and vectorization logic.
- Edit `deployment_diy_rag/custom.py` to customize the retrieval logic and LLM call.
- Run `pulumi up` to update your stack.
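As a reference point for editing the chunking logic, here is a minimal fixed-size chunker with overlap. It is illustrative only — the sizes, the function, and its behavior are assumptions, not the implementation in `notebooks/build_rag.ipynb`:

```python
# Illustrative fixed-size character chunker with overlap, the kind of logic
# notebooks/build_rag.ipynb lets you customize. Parameters are examples.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into chunk_size-character pieces, each overlapping the
    previous one by `overlap` characters."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]
```

Overlap preserves context that would otherwise be cut at chunk boundaries; real pipelines often chunk on sentence or token boundaries instead of raw characters.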
- Log into the DataRobot application.
- Navigate to Registry > Applications.
- Navigate to the application you want to share, open the actions menu, and select Share from the dropdown.
To delete all provisioned resources, run:

```bash
pulumi down
```
For manual control over the setup process, adapt the following steps for macOS/Linux to your environment:

```bash
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
source set_env.sh
pulumi stack init YOUR_PROJECT_NAME
pulumi up
```
e.g. for Windows/conda/cmd.exe this would be:

```bat
conda create --prefix .venv pip
conda activate .\.venv
pip install -r requirements.txt
set_env.bat
pulumi stack init YOUR_PROJECT_NAME
pulumi up
```
For projects that will be maintained, DataRobot recommends forking the repo so upstream fixes and improvements can be merged in the future.