The Redfish Crawler is a standalone Python CLI tool that generates a folder structure containing the JSON responses for every available endpoint of a server's Redfish implementation.
- iDRAC 7, 8 or newer
- Firmware version 2.60.60.60 or higher
- iDRAC administrative account
- Python >= 3.6
- requirements.txt
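The Python version requirement can be checked up front before installing. This is a convenience sketch, not part of the tool itself:

```python
import sys

# redfish-crawler requires Python 3.6 or newer; bail out early if the
# interpreter is too old (this check is an assumption for convenience,
# not something the tool ships with).
if sys.version_info < (3, 6):
    raise SystemExit("Python >= 3.6 is required")
print("Python version OK:", sys.version.split()[0])
```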
> git clone https://github.com/grafuls/redfish-crawler && cd redfish-crawler
> pip install -r requirements.txt

NOTE:
- This allows running the crawler script directly via `./crawler.py`
> git clone https://github.com/grafuls/redfish-crawler && cd redfish-crawler
> virtualenv .crawler_venv
> source .crawler_venv/bin/activate
> pip install -r requirements.txt

NOTE:
- Both setup methods above can be used within a virtualenv
- After using the crawler, the virtual environment can be deactivated by running the `deactivate` command
> ./crawler.py -u root -p {PASS} -H mgmt-server-r640.example.com

NOTE:
- This will create a root directory named after the server's shortname in the current working directory, containing a directory for each endpoint with its JSON output files.
- Some endpoints are blacklisted, as their data is deemed irrelevant and costly to fetch:
- "jsonschemas"
- "logservices"
- "secureboot"
- "lclog"
- "assembly"
- "metrics"
- "memorymetrics"
- "telemetryservice"
- "sessions"
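A minimal sketch of how such a blacklist filter might behave; `is_blacklisted` is a hypothetical helper written for illustration, and the crawler's actual matching logic may differ:

```python
# Endpoint names skipped by the crawler, per the list above.
BLACKLIST = {
    "jsonschemas", "logservices", "secureboot", "lclog", "assembly",
    "metrics", "memorymetrics", "telemetryservice", "sessions",
}

def is_blacklisted(endpoint: str) -> bool:
    """Return True if any path segment of the endpoint matches a blacklisted name.

    Hypothetical helper: matching is done case-insensitively on whole
    path segments of the Redfish URL.
    """
    segments = endpoint.lower().strip("/").split("/")
    return any(segment in BLACKLIST for segment in segments)

print(is_blacklisted("/redfish/v1/Systems"))                      # → False
print(is_blacklisted("/redfish/v1/SessionService/Sessions"))      # → True
print(is_blacklisted("/redfish/v1/Systems/1/Memory/DIMM1/MemoryMetrics"))  # → True
```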