This tool aims to assist non-experts in considering Responsible AI concerns for their ML systems. To engage users, the tool produces severe, surprising, concrete, relevant, and diverse scenarios within the context of the user's system. The scenarios represent how stakeholders could be harmed by the system's output.
Clone the repository:
$ git clone https://github.com/yanxinc/collab-rai-tool.git
$ cd collab-rai-tool

Create and activate an environment, then install the dependencies:
$ python -m venv .venv
$ source .venv/bin/activate
$ pip install -r app/backend/requirements.txt
$ pip install -r app/frontend/requirements.txt

Navigate to the collab-rai-tool/app/helpers folder and create a new file called cred.py with your OpenAI key:
KEY = '{your_openai_key}'

Navigate to the collab-rai-tool/app/backend folder and start the backend server:

$ uvicorn main:app --port 8502 --reload
Navigate back to the collab-rai-tool/ folder and start the Streamlit frontend:
- Full Version
$ streamlit run app/frontend/app.py

- User Testing Version 1 (with Fairness Goal 2 before Fairness Goal 3)

$ streamlit run app/frontend_test_version/app.py

- User Testing Version 2 (with Fairness Goal 3 before Fairness Goal 2)

$ streamlit run app/frontend_test_version2/app.py

If this opens our Collab RAI Impact Assessment app in your browser, you're all set!
Prerequisites
- Install Docker Engine
- Check network port accessibility
- Make sure you can access ports 8501-8504 as these are the default ports that our app uses
- Port 8501: Full Version of App
- Port 8502: Backend pipeline service
- Port 8503: User Testing Version 1
- Port 8504: User Testing Version 2
- If these ports are unavailable, change them in the Dockerfile and the docker-compose.yml files
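If a default port clashes with another service, the host-side port can be remapped in docker-compose.yml without touching the app. A minimal sketch (the service name `frontend` is illustrative; check the actual file for the real service names):

```yaml
services:
  frontend:
    ports:
      - "9501:8501"   # host port 9501 -> container port 8501 (Full Version of App)
```

Only the left-hand (host) side needs to change; the container keeps listening on its default port.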
Start deploying by running the following command
$ docker-compose up -d --build --remove-orphans

Note:
- Currently deployed at feature.isri.cmu.edu
- E.g. Visit http://feature.isri.cmu.edu:8503/ to access User Testing Version 1 of the app
- Files are located at /home/cindy/collab-rai-tool
Go to http://feature.isri.cmu.edu:8503/ or http://feature.isri.cmu.edu:8504/ in your browser to access the RAI Impact Assessment
Go to http://feature.isri.cmu.edu:8502/logs in your browser to view the log messages. Note that the logs are displayed in reverse line order.
To change the logging level, open the collab-rai-tool/app/backend/pipeline.py file and change the level in logging.basicConfig. Options include NOTSET, DEBUG, INFO, WARNING, ERROR, and CRITICAL. Then, rebuild the app.
I am building a system that provides movie recommendations to users based on their watching history and ratings data. The system receives recommendation requests and replies with a list of recommended movies. The purpose of this system is to suggest movies to users for a better user experience. The users (movie watchers) would receive more personalized recommendations. The AI / ML model uses collaborative filtering algorithms to learn from users' past evaluations of movies, approximate ratings of unrated movies, and then give recommendations based on these estimates. A user story: As a movie watcher, I want to request personalized recommendations based on my interests and previous watch history, so that I can discover new films that match my preferences and enhance my viewing experience.
Sample stakeholder list: ['Movie Watchers', 'Movie Producers and Distributors', 'Advertisers', 'Streaming Services']
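For readers unfamiliar with the technique named in the sample description, here is a minimal user-based collaborative filtering sketch. All names and ratings are illustrative and not part of the tool; a real system would use a trained model over far more data:

```python
from math import sqrt

# Toy ratings matrix: user -> {movie: rating}
ratings = {
    "alice": {"Heat": 5, "Alien": 4, "Up": 1},
    "bob":   {"Heat": 4, "Alien": 5, "Up": 2},
    "carol": {"Heat": 1, "Up": 5, "Coco": 4},
}

def similarity(a, b):
    """Cosine similarity over the movies both users have rated."""
    common = set(ratings[a]) & set(ratings[b])
    if not common:
        return 0.0
    dot = sum(ratings[a][m] * ratings[b][m] for m in common)
    na = sqrt(sum(ratings[a][m] ** 2 for m in common))
    nb = sqrt(sum(ratings[b][m] ** 2 for m in common))
    return dot / (na * nb)

def predict(user, movie):
    """Estimate a rating as the similarity-weighted average of
    other users' ratings for that movie."""
    scores = [(similarity(user, other), r[movie])
              for other, r in ratings.items()
              if other != user and movie in r]
    total = sum(s for s, _ in scores)
    return sum(s * r for s, r in scores) / total if total else None

print(predict("alice", "Coco"))  # -> 4.0 (only carol rated Coco)
```

Movies with the highest predicted ratings among a user's unrated titles become the recommendations.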
