Pinned

  1. agent-eval-toolkit (Public)

    Decision-oriented evaluation toolkit for LLM & agent systems, focusing on trust, failure modes, and deployment readiness in enterprise environments.

    Python

  2. spk_balance (Public)

    An evaluation-oriented MVP exploring speaking–writing feedback loops for assessing agent and LLM communication quality.

    Python

  3. agent-accountability-eval (Public)

    An evidence-based system for evaluating agentic AI trustworthiness through accountability, continuous evaluation, and human-in-the-loop governance.

  4. worldsim-eval (Public)

    Evaluates AI agents by simulating world-level consequences of their actions.