Responsible AI

Findem’s commitment to responsible AI

Artificial intelligence (AI) has been core to Findem since our founding in 2019. We are committed to the responsible and ethical use of AI to help talent decision makers make faster, better, and fairer decisions with data.

Responsible AI at Findem

Findem does not intend to replace human decision makers with AI, but to use AI to assist the people responsible for decision making. By automating as much of the IQ side of the talent process as possible, Findem gives talent team members more time to focus on the EQ side.

Talent teams that use AI to drive efficiency and productivity will become more strategic in their approach to building a diverse, high-performing workforce. They will have more resources and opportunities for innovation and transformation.

How Findem uses AI

Findem is using AI to deeply transform workflows across the talent lifecycle. By enabling robust automation of manual, repetitive practices, Findem frees up time and resources for people to engage in more meaningful and innovative activities.

The introduction of large language models (LLMs) and generative AI makes vast amounts of data accessible and removes adoption barriers, providing both the data and the context needed for data-driven decision making.

Findem uses a BI-first strategy, building an AI-assist infrastructure layered over a robust BI data platform. This strategy prioritizes the collection, analysis, and presentation of data to provide insight and support decision making, then uses AI to learn, reason, and make predictions or recommendations with trusted outcomes in the following ways:

  • Findem generates new and unique data called “attributes,” using machine learning to combine people and company data over time.
  • Findem uses a machine learning model to correctly associate social profiles across social media platforms and to combine them with other relevant people and company data into enriched profiles.
  • Findem uses AI for gender and ethnicity diversity classifications, building balanced talent pools with a probabilistic model rather than a deterministic one. A team of researchers continuously curates and validates these classifications.
  • Findem uses LLMs to understand the intent of a platform user’s request.
  • Findem uses generative AI through prompts embedded in the platform’s workflows. These prompts leverage Findem’s data, which is converted to a vector before being shared with GPT-4, preventing the underlying data from becoming public.
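The vector-based prompt flow described above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not Findem’s actual implementation: `embed_text`, `build_llm_payload`, and the hash-derived vector are invented stand-ins for a real embedding pipeline. The point is structural: only the user’s intent and a numeric vector leave the platform, never the raw internal data.

```python
import hashlib
import json

def embed_text(text: str, dims: int = 8) -> list[float]:
    """Toy embedding: derive a fixed-length vector from a hash of the text.
    A real system would use a learned embedding model; this stands in for
    the idea that only a numeric representation leaves the platform."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return [b / 255.0 for b in digest[:dims]]

def build_llm_payload(intent: str, internal_data: str) -> str:
    """Build the request sent to the external LLM: the user's intent plus a
    vector derived from internal data -- never the internal data itself."""
    payload = {
        "intent": intent,
        "context_vector": embed_text(internal_data),
    }
    return json.dumps(payload)

payload = build_llm_payload(
    intent="summarize candidate pool trends",
    internal_data="Jane Doe, VP Engineering, jane@example.com",
)
# The raw profile text never appears in the outbound request.
print(payload)
```

Inspecting `payload` shows only the intent string and a list of floats; the profile text itself is unrecoverable from the request.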

Essential to the AI-assist infrastructure is human oversight and control, with ongoing monitoring and assessment processes that are explainable and auditable.

  • Findem has built-in guardrails to protect against incorrect deductions and hallucinations, where the LLM fills in missing information.
  • Findem’s AI uses verifiable facts that we call “attributes” as the basis for reasoning and insights with human validation and verification.
  • Findem has been intentionally designed not to make decisions on behalf of any persona, but to assist people in making the right decision.
  • Findem has built-in response moderation, with both a human in the loop and checks and balances in the infrastructure.

Findem complies with pertinent legal and regulatory frameworks concerning security and data privacy. Findem has built robust middleware that serves as a bridge between public LLMs and Findem services. The middleware leverages LLMs for their true strength: capturing and interpreting intent. The context, not the data, is transmitted to the public LLM. Findem vectorizes and anonymizes all inputs, safeguarding privacy by never sharing personally identifiable information (PII) with public LLMs. This approach protects user data and maintains a high level of privacy.

  • AI is never used to make a subjective evaluation of a person. Findem is a searching and matching platform, not a candidate evaluation platform.
  • Findem does not automatically advance or reject applicants.
  • Findem does not use AI for searching and matching. These capabilities are BI (query) based.
  • Because AI is not used for searching and matching, this design mitigates bias and discrimination concerns.

Findem’s AI design principles

From our founding, Findem has prioritized security, privacy, and robustness. These are the AI design principles that support our infrastructure:

  • Findem prioritizes human-centered design with workflows, automations, and features to augment human capabilities with AI assistance.
  • Findem prioritizes robustness and safety by designing to minimize data risk, maximize data availability, and maintain data integrity.
  • Findem supports decision-making processes with transparency and explainability built into workflows, dashboards, and planning tools.
  • Findem promotes fairness and avoidance of bias by taking subjectivity out of the search process, using attributes and enriched profiles instead of keywords and resumes.

Intention is only valid with accountability: Findem complies with current legal and regulatory frameworks and maintains an up-to-date privacy policy as well as data security and trust policies.

Consent and control


The “Do Not Sell or Share My Personal Information” form on our public website allows anyone to update or remove their information from Findem’s Talent Data Cloud.

For questions about this document, please contact [email protected]. Individual outcomes vary by customer; Findem is not responsible for any liability, loss, or damage related to the use of this document.
