Preliminary Report: Making AI Systems Accountable Through A Transparent, Grassroots-Based Consultation Process
1. Overview & Rationale
As the Philippines' top digital news and investigative organization, Rappler is at the forefront of leveraging digital technologies to scale the reach of public interest journalism while promoting civic engagement that is critical to addressing society's most urgent problems.
When OpenAI called for “experiments in setting up a democratic process for deciding on rules AI systems should follow within the bounds of law,” Rappler saw the need to participate in order to ensure that Filipino voices will be heard in determining policy for large language models.
[1] https://www.rappler.com/moveph/39377-introducing-project-agos/
[2] https://www.rappler.com/nation/148007-propaganda-war-weaponizing-internet/
[3] https://www.rappler.com/world/global-affairs/202943-global-south-demands-facebook-parity-transparency-accountability/
[4] https://www.rappler.com/nation/factsfirstph-wins-most-impactful-collaboration-global-fact-9-june-2022/
While online consultation processes are scalable, it is important to recognize lessons learned from how online mobs and disinformation networks successfully hacked online civic spaces, and consequently democracies around the world, by manipulating public opinion. In such instances, it is possible that even surveys may in fact be merely measuring the impact of systemic manipulation rather than serving as genuine mechanisms for gathering democratic inputs.[5]
In a global setting, there is also a need to account for geopolitical, cultural, economic, technological, and linguistic diversity. Some cases call for going beyond the usual processes in order to elicit critical contextual information that is crucial to making informed policy decisions.
The process we suggested had multiple layers and forms in order to ensure that grassroots concerns are not lost in the process of generating overall consensus. We also wanted to illustrate that different forms of consultation may yield unique nuances which could be integral to shaping policy.
The process we designed primarily leverages the capacity of large language models to generate both qualitative and quantitative insights from various types of unstructured inputs (text and audio) from participants, in a consultation process that combines the quantitative nature of survey research with the qualitative depth of insights from focus groups.[6]
All in all, the 15 sessions generated a total of 95 initial policy ideas, which then need to undergo a refinement process.
[5] Pauline Macaraeg, "Study finds signs of 'networked political manipulation' on social media," February 2, 2022, https://www.rappler.com/nation/elections/study-finds-sign-networked-political-manipulation-social-media-digital-public-pulse-project/
[6] Andreas Dengel, Rupert Gehrlein, David Fernes, Sebastian Görlich, Jonas Maurer, Hai Hoang Pham, Gabriel Großmann, and Niklas Dietrich Genannt Eisermann, "Qualitative Research Methods for Large Language Models: Conducting Semi-Structured Interviews with ChatGPT and BARD on Computer Science Education," October 12, 2023, https://www.mdpi.com/2227-9709/10/4/78
[7] "Any ideas on rules that should govern AI? Tell Rai, Rappler's AI moderator," September 16, 2023, https://www.rappler.com/technology/rappler-launches-ai-moderator-rai-2023/
In this initial report, we provide evidence that by leveraging the capacity of large
language models to process and synthesize inputs in audio and text formats, it is
possible to:
● conduct consultations using multiple focus groups, online and onground, in order to get a sense of the views of diverse stakeholders in relation to a specific issue; and
● generate grassroots-level policy ideas that represent the views of those specific stakeholders and cohorts.
The report explains how the entire process was executed. These are preliminary findings. We will update this report when we finish processing all the inputs.
Project team:
1. Maria Ressa - Adviser
2. Gemma B. Mendoza
3. Don Kevin Hapal
4. Gilian Uy
5. Ogoy San Juan
6. Hamilton Chua
It is difficult to come up with a consultation mechanism that is fully representative of the public in a purely online process, especially in countries like the Philippines, where internet and mobile data access remains limited.
At the start of 2023, internet penetration in the country stood at 73.1%, according to WeAreSocial. This means that over a quarter of the population as of January 2023 was still not connected to the World Wide Web.[8]
Self-reported online demographic data is often unreliable, as WeAreSocial pointed out in a 2017 article.[9] More recent studies showed fraudulent behavior within platforms like Mechanical Turk, which threatens the reliability of data generated by these platforms.[10]

[8] https://datareportal.com/reports/digital-2023-philippines
[9] https://wearesocial.com/uk/blog/2017/08/startling-truths-about-facebook/
How different people engage with others also varies in degree and quality. Some are inclined to create content or comment, while others are content to just vote or post ratings. This is illustrated in various research on Social Technographics conducted by the Forrester market research group.[11]
Unlike on-ground survey respondents, online survey respondents tend to be prone to the "self-selection effect."[12] This leads to the probability that survey outcomes may be influenced by those most opinionated about a subject matter.

[10] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3233954
[11] https://www.forrester.com/blogs/10-06-25-the_data_digest_the_social_technographics_profile_of_facebook_and_myspace_users_us/
[12] Jelke Bethlehem, "Selection bias in web surveys," International Statistical Review, Vol. 78, No. 2 (August 2010), pp. 161-188, International Statistical Institute (ISI), https://www.jstor.org/stable/27919830
Two surveys were conducted: (a) a pop-up survey on the Rappler website, which randomly pops up for website users in order to avoid gaming; and (b) a link survey distributed to target communities, done by getting research partners and targeted groups to respond to a survey link.
Apart from data on participant demographics and digital media consumption, Rappler used these surveys to get initial participant views on various AI-related traits.
At the end of the survey, respondents were asked if they were willing to participate in further conversations around the issue. Those who signified intent were asked to provide their contact information.
While surveys have the potential to scale, they are clearly not the most appropriate mechanisms for surfacing novel ideas, due to the limitations of what can be structurally accommodated in typical survey questionnaires.
This is where focus groups become valuable. In market research, focus group discussions (FGDs), while smaller in sample size, provide venues for open discussions and for sharing and cross-pollinating ideas. They also serve as excellent mechanisms for uncovering views that would otherwise have been lost in a larger group.
One limitation of FGDs, however, is that they are rarely large enough to draw definitive conclusions from. The qualitative data they produce is unstructured and needs to be coded in order to organize the data and interpret meaningful results.
This is where an AI-assisted, FGD-like chat mechanism that can simultaneously do quantitative and qualitative analysis becomes useful.
As part of this component, Rappler developed aiDialogue, a chat room where it prompted ChatGPT to assume the persona of Rai, an FGD moderator seeking answers from the public to this vital question: How should AI be governed?
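To illustrate the general idea, below is a minimal sketch of a persona-style moderator prompt using the OpenAI chat completions API. The persona wording, model name, and helper function are hypothetical placeholders rather than the production prompt; the deployed application builds its prompts through Langchain, as described in the technical appendix.

import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Hypothetical sketch of an FGD-moderator persona; not Rai's actual prompt.
async function askAsModerator(participantMessage: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "gpt-4", // assumed model; the production deployment may differ
    messages: [
      {
        role: "system",
        content:
          "You are Rai, a focus group discussion moderator. Ask neutral, " +
          "open-ended questions to gather views on how AI should be governed.",
      },
      { role: "user", content: participantMessage },
    ],
  });
  return completion.choices[0].message.content ?? "";
}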
There are currently two implementations of this chat room: (i) the main chat room, accessible through aidialogue.rappler.com, where anybody can create an account and add their inputs anytime; and (ii) session-based chat rooms, which are specifically deployed to cater to select cohorts.
Aware that some participants may not necessarily be comfortable talking about their opinions and concerns about AI with an AI moderator, Rappler conducted Community Dialogue sessions, where the sessions on aiDialogue were complemented with a follow-up human-moderated FGD.
Two were done onground, while the rest were conducted online through a video conferencing service.
3.3.1. Text inputs via aiDialogue - For the first hour, participants were asked to log in to an online chat service moderated by ChatGPT, where they responded directly to questions from Rai, the AI moderator of aiDialogue. Rai would ask questions concerning how they feel about AI and policies that they think should govern the use of AI.
As in the AI-only sessions, the system also generated a set of policies, based on transcribed audio recordings of the inputs from participants in that specific session. Participants were then asked to vote on those policies.
In theory, having ideas generated at the level of small private groups with shared
demographics could help build confidence in the system, as the smaller sample
size makes it easier to trace how the policy ideas are linked to the actual
participant inputs. This builds credibility for the process.
A human moderating a focus group has the advantage of being able to build rapport with members of the group. There is also the additional advantage of being able to benefit from non-verbal communication cues such as expressions and body language.
On the other hand, the chat-based discussion has the potential to scale up the conversation, with all participants able to respond in parallel. It may be the preferred option for those who prefer articulating their views in written form.
[13] https://podcast.adobe.com/enhance#
[14] https://openai.com/research/whisper
4. Participant Recruitment & Demographics
As far as geographic spread is concerned, it appears that the survey was able to get respondents from all 17 Philippine regions, with higher representation from some highly populated and more internet-connected regions.
In terms of age, the age group 40 years old and above is over-represented, accounting for a total of 1,115 respondents (32.54%), while those aged 39 and below numbered only 432 (12.61% of total respondents). Over half of the respondents chose not to respond to various demographic questions.
In terms of level of education, 25.65% indicated that they are college level, 20.46% declared themselves to be post-graduate level, and 6.89% indicated that they were high school level and below, while 47.01% did not fill out the field for educational level achieved. It must be noted that this spread is not representative of educational levels in the Philippine population. The skew is likely due to the fact that the older age groups are over-represented.
In terms of gender, 25.09% indicated they were male and 20.63% female, while 2.25% chose "other." The remaining 52.03% either did not fill out this field or indicated that they would rather not specify their gender.
When asked if they were interested in further conversations about the topic, 847 said they were, but only 558 gave contact details. Those who gave contact information were then invited to participate in aiDialogue sessions. A number have participated, but not everyone has done so yet.
To complement the website survey, Rappler circulated survey links through partner research groups and targeted cohorts. A total of 359 participated in this survey, of which a big majority (326) responded to AI-related questions.
Younger respondents had more representation in this survey, with 62.58% of respondents coming from age groups 39 and below and 25.77% coming from age groups 40 and above. Of the total respondents, 11.35% did not indicate their age group.
The community survey skewed more toward female respondents, who accounted for 51.53% of all community survey respondents. At least 28.75% identified themselves as male, while the rest either identified their gender as "other" or did not indicate their gender at all.
Survey participants who signified interest in further conversations around the topic served as the initial pool of participants recruited to the aiDialogue sessions. This was supplemented through targeted recruitment of participants from key sectors.
Total aiDialogue participants included 186 who identified themselves as female and 149 who identified themselves as male. The rest did not indicate their gender. In terms of age, 204 were 25 years old and below and 85 were between 25 and 40 years old, while 41 were above 40 years old.
Not everyone was able to submit responses. A total of 197 participants submitted at least one response in these sessions. Participants would typically choose the questions they responded to. Only 49 participants responded to all questions asked during their respective sessions.
Results of the survey indicate a high degree of concern over the potential of generative AI technologies to cause disinformation, with over 80% of respondents either agreeing or strongly agreeing with the following statements:
● "I'm uncomfortable with AI-generated deep fake videos that could be used to fool people."
● "I'm concerned that people might blindly trust AI-generated information without evaluating its accuracy."
After disinformation, the next big concern is privacy, with 79.35% of respondents agreeing with the statement, "I am concerned about my privacy and don't want platforms collecting data on things I watch or engage with."
This is followed by jobs, with 70.27% of respondents agreeing with the statement, "I am wary of the potential impact of generative AI on the livelihood of artists and creative professionals."
[Charts: distribution of agreement and disagreement with the AI-related statements, including "I am concerned about my privacy and don't want platforms collecting data on things I watch or engage with."]
We also noted that more than half of those who identified themselves as having negative concerns about AI technologies also indicated that they have not used these technologies.
However, it is also worth noting that the proportion of respondents who agreed with negative issues relating to AI technologies is mostly similar between those who have not used these technologies and those who have. An example is below.
Agreement with the example statement, by response to "Do you use artificial intelligence-powered platforms and tools?":

              Strongly Disagree   Disagree      Neutral        Agree          Strongly Agree   Total
Yes           124 (12.2%)         58 (5.7%)     131 (12.9%)    180 (17.7%)    523 (51.5%)      1016 (100%)
What is AI    65 (15.8%)          42 (10.2%)    57 (13.8%)     80 (19.4%)     168 (40.7%)      412 (100%)
No            92 (6.4%)           47 (3.3%)     176 (12.3%)    309 (21.5%)    812 (56.5%)      1436 (100%)
Total         281 (9.8%)          147 (5.1%)    364 (12.7%)    569 (19.9%)    1503 (52.5%)     2864 (100%)
Compared to both those who expressly declared that they do not use AI and those who declared that they have used AI tools, those who do not know what AI is showed lower apprehension about it.
Towards the end of every aiDialogue session, a prompt generated a list of the policy ideas that surfaced, based on summaries of the responses to the various questions answered by the cohort. Participants were asked to upvote or downvote each policy idea.
A total of 95 individual policy ideas were generated. For example, below is a list of policies generated at the end of one session.
● Mandatory classes on AI, including topics such as citing sources, ethical AI use, and effective AI use in different applications, should be introduced under computer studies classes.
● Deep learning models used for AI should be included in the curriculum.
● A law should be established to hold AI creators accountable for their models.
● Users should always fact-check responses given by AI and ask for the sources of information.
● Users should not give away personal information to AI.
● AI creators should limit the access of AI to government sites that may contain private information of individuals.
● AI creators should provide a list of all URLs the AI used to create its response and give disclaimers if the facts provided came from 'unofficial' information sites.
● AI should always cite its sources and provide disclaimers.
● AI should only use verified and trusted sources and should have a mechanism to filter out outdated and incorrect information.
● Developers should create a bank of registered news websites and internationally recognized organizations for information that may be considered credible.
Some of these ideas cover overlapping themes and could either be merged or could benefit from further refinement, as they may lack specific details an enforceable policy would need. For instance, these three items could be merged:
● AI creators should provide a list of all URLs the AI used to create its response and give disclaimers if the facts provided came from "unofficial" information sites.
● AI should always cite its sources and provide disclaimers.
● AI should only use verified and trusted sources and should have a mechanism to filter out outdated and incorrect information.
Upvotes received by most of the policy ideas indicate buy-in from participants for the policies generated from the conversation.
Overall, the policy ideas generated 981 upvotes and 32 downvotes, or a 0.98 silhouette score.[15] Although traditionally used for evaluating clustering, we use the silhouette score as a proxy for alignment because it gives a score of 0 when participants are split evenly and a score of 1 when participants are 100% homogenous in their decision. The silhouette score is computed using the formula (max(a, b) - min(a, b)) / max(a, b), where a and b are the upvote and downvote counts.

[15] https://scikit-learn.org/stable/modules/clustering.html#silhouette-coefficient
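As a quick illustration, here is the same computation in code (a hypothetical helper, not part of the aiDialogue codebase):

// Agreement score for a set of votes, per the formula above:
// 0 when upvotes and downvotes are split evenly, 1 when unanimous.
function agreementScore(upvotes: number, downvotes: number): number {
  const a = Math.max(upvotes, downvotes);
  const b = Math.min(upvotes, downvotes);
  return a === 0 ? 0 : (a - b) / a;
}

console.log(agreementScore(90, 10)); // 0.888..., a strongly aligned vote
console.log(agreementScore(50, 50)); // 0, an evenly split vote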
Participants were also generally impressed by the way the AI summarized their inputs.
It also appears possible to use transcripts of the human-moderated focus groups in
order to generate policy ideas by leveraging the capabilities of generative artificial
intelligence models.
The team attempted to do this by initially generating transcripts from the audio recordings of the sessions. GPT was then prompted to generate policy ideas from participant concerns. Below are samples of the ideas generated, along with statements from the conversations that support such policy ideas.
This is a summary based on the statements provided earlier. It goes without saying, though, that real policy creation would involve a more thorough and iterative process to ensure that all stakeholders' concerns are addressed.
Policy 1: ChatGPT must always be transparent about its origins, processes, and limitations.
Policy 2: When discussing politics, ChatGPT should aim for global neutrality and minimize inherent biases.
Policy 3: ChatGPT should offer a mechanism for users to provide feedback and further improve its accuracy and neutrality.
Policy 4: ChatGPT must ensure users are aware of its potential biases and always encourage critical thinking.
The challenge that the team ran into is in consolidating inputs from the various cohorts into general constitutional statements while still making it possible to trace the origin of the final policy ideas.
Our source data is composed of all summary statements across all cohorts. Each statement is labeled with its session name and a unique number, e.g.

[aidialogue1015_m2_4] The users believe that ChatGPT should be able to share information about public figures and influencers, as long as the information is publicly available and does not include personally identifiable information (PII). However, it should also be careful to avoid making misleading statements about them.
The summary statements are shuffled, then divided into 12 batches of roughly 40 statements each. The first batch is used to generate an initial set of policies; the rest of the batches are then iteratively fed in, together with the existing policies, using the following prompt:
Incorporate the following statements into the existing
policies by either:
1. Adding their labels to the contributing / contrasting
statements
2. Making a note for nuance changes in an existing policy
3. Creating a new policy
The intention is to repeat this process 10 times, shuffling the data before splitting it into batches. We then compare all 10 versions for similarities in both policy rules and attribution.
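A minimal sketch of that shuffle-and-batch loop follows; the two LLM calls are declared as stubs, and all names are illustrative rather than taken from the production pipeline:

// Stand-ins for the LLM calls described above: seeding policies from a batch,
// and incorporating a further batch into the existing policies.
declare function generatePolicies(batch: string[]): Promise<string>;
declare function incorporateBatch(policies: string, batch: string[]): Promise<string>;

// Fisher-Yates shuffle of the labeled summary statements.
function shuffle<T>(items: T[]): T[] {
  const copy = [...items];
  for (let i = copy.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [copy[i], copy[j]] = [copy[j], copy[i]];
  }
  return copy;
}

// Split the statements into batches of a fixed size.
function toBatches<T>(items: T[], batchSize: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

// One run: seed policies from the first batch, then fold in the rest iteratively.
async function runOnce(statements: string[]): Promise<string> {
  const batches = toBatches(shuffle(statements), 40);
  let policies = await generatePolicies(batches[0]);
  for (const batch of batches.slice(1)) {
    policies = await incorporateBatch(policies, batch);
  }
  return policies;
}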
The team explored using Claude [16] to generate policy ideas based on participant inputs. We attempted to generate constitutional policies based on all the responses gathered from the AI-moderated sessions. The goal was to map which session or group contributed to the formation of a policy, in the hope of ensuring traceability of each cohort's inputs to a policy.
Claude was identified as having a large enough context window to process all the responses. We labeled each response and asked Claude to generate policies straight from the user inputs. This is in contrast with the per-session policies, which were generated from summaries that had already been processed by an LLM.
[16] https://aws.amazon.com/bedrock/claude/
Format of the responses:

[group name][user name]: response
The output, however, only yielded very general summaries. This was the output from Claude:
“Thank you all for the thoughtful discussion on governing AI responsibly. Here are some key points that emerged:
- Education is crucial - both in teaching the public how to use AI responsibly, and training AI developers on ethical design. This could be incorporated in school curricula.
- Data governance policies should uphold privacy and security and prevent misuse. Consent and attribution are important when using data to train AI models.
- AI literacy among the public will empower people to make informed choices. Multi-stakeholder collaboration is key in developing standards and governance frameworks.”
While we were able to send all the responses for processing, the results were general summaries of what people discussed, with many details sacrificed for brevity. The session-generated policies were more exhaustive, and some ideas that were identified in the sessions were missed here.
The goal of tracing the groups' contributions to the forming of a policy was still not achieved. It could be possible to iterate further on the prompt, but there is the issue of scalability when more groups are added.
5.3.1. Inclusiveness
Under this key performance indicator, we wanted to be able to account for gaps in information and technology access while accounting for minority perspectives. We tried to solve this challenge by providing various mechanisms for consultation: AI-moderated text-based chat (via aiDialogue) and human-moderated sessions, online and onground.
We wanted to know if the level of participation of an individual could differ depending on the medium. We did this by comparing the verbal and written outputs of participants.
Cohorts that had fewer participants are expected to have points that go further along the diagonal. Participants from the Region 8 cohort, for instance, displayed a higher level of preference for either written or verbal participation. Some participants answered only a few questions, and succinctly at that, but had above-average airtime.
Allowing for additional mechanisms for consultation helped address technology gaps among people who might not be comfortable with tech due to age or socio-economic class, those who may have older devices, as well as other technical issues that make participation in a purely online process difficult.
Minority perspectives, on the other hand, are addressed by spinning off special sessions for specific sectors so that sectoral views are not drowned out in generalized conversations.
As part of this project, the team was able to gain perspectives from the following sectors: journalists, activists, data scientists, law practitioners, students, communication educators, religious workers, and government workers.
It would be easy enough to spin off additional sessions for other target sectors or
communities. This responds to our concern over the need to still surface local and
cultural nuances as well as geopolitical and socio-cultural contexts as opposed to
“universal” viewpoints.
While this could skew insights toward the target groups covered so far, a representative view can still be generated through a survey or referendum. The one we did for this project was also able to get views from a good geographic spread within the country, across age groups and genders.
5.3.3. Scalability
Offline or onground human-moderated FGDs are clearly not that scalable, considering the logistical requirements of such initiatives and the limitations they encounter.
However, the team has illustrated that this might be necessary in certain situations. The team has also demonstrated that it is feasible to generate policy ideas from participant inputs from these offline consultations by processing transcripts of the sessions using a large language model. This opens an opportunity to still integrate these offline mechanisms into the overall policy idea gathering process.
Note that the scalability of offline mechanisms hinges on the quality of the transcription models in the local language and accent. Transcriptions in English only needed a passing run to check for accuracy, and thus could be listened to and verified at 2x speed. Local languages had more mistakes, which could change the meaning of the message and thus the policies that could be extracted. Manual correction of these transcripts took twice the length of the actual dialogue. An example of a critical mistranscription is below:

Transcription: "Did it hurt or did the problem just disappear?"
Actual: "Did it hurt or mas lumala ba ang problema?" (...or did the problem get worse?)
On the other hand, the scalability of the online process (AI-moderated FGD) is limited only by the capacity to initiate a session. Keeping participant numbers at a manageable level helps us work around the current context window limitations of large language models. Generating policies at the cohort level has inherent traceability benefits that contribute to the other points outlined above.
Initially, at the end of a session, we used all the responses to all the questions in generating the prompt for policy creation. We ran into issues with token limits when participants were sending more comprehensive responses. While it would have been ideal to have no intermediate steps between input (user responses) and output (policies), we needed to use the response summaries per question instead.
5.3.4. Integrity
Because of Rappler's experience with trolling on social media, the team was particularly concerned about securing the system and mitigating possible abuse. One risk with online consultation systems is the possibility of astroturfing. This risk was mitigated in two ways: (a) by requiring user authentication on the publicly available main session; and (b) by releasing per-cohort sessions only to verified cohort participants.
Requiring participants to divulge their identities, however, could prevent them from being more candid and speaking their minds. We balanced this requirement of authentication with the need to encourage participants to speak freely by assigning a randomly generated username to each participant. These measures seem to have worked, since the team did not observe any problems with unverified accounts participating in the online focus group discussions via aiDialogue.
Verification was not possible for the pop-up survey. However, the risk of people repeatedly responding in a way that could skew the results is minimized because the publicly available survey mechanism pops up randomly to users, thereby avoiding repeat responses by the same participant.
The other critical concern in relation to process integrity is the need for transparency, so that there would be buy-in for the process. For this to happen, the team recognized the need to ensure that it is possible to trace generated policy ideas back to participant inputs.
Given the challenges in relation to policy refinement using AI tools, the team realized that an additional step is needed in order to come up with more meaningful and enforceable policies.
This requires bringing human experts into the policy refinement process. In effect, the aiDialogue process becomes the mechanism through which new policy ideas around specific themes can be generated at the grassroots level.
At this level, the process benefits from the capability of large language models to synthesize, generate text, and extrapolate preferences from participant inputs.
When it comes to the policy refinement process, we recommend that human experts be brought into the loop, but that mechanisms should ensure that it is possible to document the action taken on each of the policies enrolled from the cohort discussions. The humans can act like a Technical Working Group (TWG) representing a range of viewpoints and expertise.
The TWG can help with addressing issues of enforceability of enrolled policy ideas.
Ideally, a unique ID and related metadata should be attached to each of the policy ideas. If the TWG determines that it makes sense to consolidate or merge some of the policy ideas, it should be possible to identify which policy ideas were substituted, similar to the way bills are consolidated and substituted in a legislative process.
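For illustration, a policy idea record carrying such an ID and metadata could look like the following sketch (all field names are hypothetical):

// Hypothetical record structure for tracking a policy idea through refinement.
interface PolicyIdea {
  id: string;                      // unique ID, e.g. "cal-P13"
  text: string;                    // the policy statement as enrolled
  sessionId: string;               // cohort session the idea came from
  supportingResponses: string[];   // labels of contributing participant inputs
  status: "enrolled" | "merged" | "refined" | "dropped";
  mergedInto?: string;             // ID of the consolidated policy, if substituted
  twgNotes?: string;               // rationale recorded by the technical working group
}

Recording mergedInto in particular mirrors the bill consolidation analogy: a substituted idea stays in the record and points to the policy that absorbed it.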
Following policy refinement by the technical working group, it should be possible to then submit the proposed policies for referendum, either to the original participants or to a pool of respondents deemed representative of the population.
This experiment is still ongoing, as we are still processing additional data. We also intend to run further consultations in other locations within the country in the coming weeks.
This report will be updated when those consultations, the technical working group, as well as the policy referendum are done.
It is also worth noting here that while this experiment was conducted for the purpose of
gathering ideas on rules of behavior for the use and development of artificial intelligence
systems, the same process and mechanisms can also be applied to consultations on
other policy questions.
Policy ideas generated per cohort session:
aid1015-P3: Develop data privacy laws to protect personal data and prevent misuse before the creation of AI tools.
aid1015-P4: Ensure inclusive stakeholder consultation in AI governance, involving governments, organizations, and individuals.
aid1015-P5: Mandate the inclusion of sources in AI responses to allow users to verify the information provided.
aid1015-P6: Implement a collaborative approach to data protection involving governments, individuals, and organizations.
aid1015-P7: Ensure government accountability in AI governance, holding them responsible for the increasing role of AI in everyday lives.
aid1015-P8: Promote the principles of fairness, ethics, accountability, and transparency in AI development and use.
aid1015-P9: Promote public education on the potential harms and benefits of AI technologies, such as deepfakes.
aid1015-P10: Emphasize the role of the academic sector in educating the public on the societal implications of AI, beyond its technological aspects.
aid1015-P11: Encourage awareness and education in AI governance, empowering individuals to make responsible choices in their digital actions.
aid1015-P12: Highlight the importance of users being aware of the limitations of AI.
car-P1: Ensure the accuracy and truthfulness of the reference data used in AI models through stringent identification and validation protocols.
car-P2: Establish data ownership and obtain access permission before use by the AI models.
car-P3: Implement the ability in AI models to filter the information they generate to maintain the accuracy and truthfulness of the output.
car-P4: Build strict and stringent security and privacy rules at the code level to safeguard AI models against potential cyber hacking threats.
car-P5: Ensure that AI models' responses are based on publicly available or disclosed data, and can discern private data that should not be divulged.
car-P6: Ensure that AI models' responses are factual, particularly when dealing with public figures and government officials.
car-P7: Program AI models to understand and respect the uniqueness of each individual, regardless of whether they are private individuals, public figures, or government officials.
car-P8: Implement filters and safeguards in AI technologies to align their behavior with human values and principles of human rights.
car-P9: Observe and recognize the governing laws of the respective states/countries, and the Constitution, in the development and use of AI technologies.
car-P10: Advocate for a systems approach to AI research and development to account for other emerging technologies and promote human participation.
car-P11: Prioritize awareness of privacy issues, factual vs. imagined content, and data accuracy concerns among both users and creators of AI models.
car-P12: Ensure that AI models handle sensitive topics using statements from the source, and comply with requests provided they are not inflammatory or dangerous.
car-P13: Ensure that AI models stick to factual information when responding to questions about politics, laws, and the government.
car-P14: Ensure that AI models provide a neutral representation of subjective topics by providing different alternative versions available and indicating which individual or group has taken said 'position'.
car-P15: Avoid politically biased responses in AI models and ensure adherence to community standards and policies.
cal-P1: AI systems should restrict access to sensitive personal data, including racial and ethnic origin, political ideas, religious or intellectual convictions, genetic, biometric, and health data.
cal-P2: Any data that could potentially be linked to financial accounts, such as email addresses, should be restricted by AI systems to prevent unauthorized access and potential financial fraud.
cal-P3: AI systems should require consent and verification before accessing or using biometric data, ensuring the privacy and security of individuals' unique biological traits.
cal-P4: AI should not freely give out information that could compromise any individual, regardless of their societal stature.
cal-P5: AI governance should take into account the privacy of individuals who are not public figures or government entities.
cal-P6: Users should be aware of what data they are sharing and which ones may compromise them.
cal-P7: AI should be used for the betterment of society and strive for constant improvement in their usage.
cal-P8: AI should be used as a tool for self-improvement rather than a replacement for human skills and talents.
cal-P9: Users should be limited, blocked, or banned from accessing sensitive information against guidelines.
cal-P10: AI should use filters to determine which questions to answer, maintaining the objectivity and accuracy of the responses.
cal-P11: Before generating models from an individual's data, the person's consent should be sought out.
cal-P12: AI developers should work with the government and ensure that while AI generators are created and continuously developed, the same attention should be focused on developing applications that will detect plagiarism and the like.
cal-P13: There should be stricter guidelines or regulations on the requirements for AI creators, including provisions of data privacy law to ensure the protection of user data.
cal-P14: An AI data regulatory board should be established to provide a centralized authority to oversee the governance of AI.
cal-P15: AI should be governed by a human, a personnel or an agency from the government to prevent it from causing harmful effects.
r8-P1: AI developers should consider potential risks and consequences of their models.
r8-P2: AI should be designed to be a partner, not a substitute, for human tasks to prevent job displacement.
r8-P3: AI should adhere to ethical guidelines, such as principles of honesty, fairness, and impartiality.
r8-P4: AI should be trained to recognize discriminatory language and provide inclusive responses to all users regardless of their gender, race, or religion.
r8-P5: AI developers should continuously improve AI models, listen to public feedback, and collaborate with experts.
r8-P6: Users should give explicit consent before their data is accessed and used, and they should have control over what data is being collected and processed.
r8-P7: Organizations collecting data must have strong cybersecurity policies, including regular security audits and best cybersecurity practices.
r8-P8: Selling or using users' data for discriminatory purposes or to manipulate users should be prohibited.
r8-P9: AI should adhere to existing laws and regulations related to data privacy.
r8-P10: Developers should create a form of user's consent to determine whether the user's information is for public consumption or not.
r8-P11: AI applications should be regulated, especially in academic settings.
r8-P12: Lawmakers should create legislation to prevent security breaches and to protect the workforce.
r8-P13: Invest in capacity development programs focusing on skills and roles that AI cannot easily replace.
r8-P14: Strengthen media literacy programs, promote fact-checking, and create an AI model specifically trained in determining AI-generated content.
r8-P15: Users should be educated about the risks and benefits of AI, and creators should educate users about AI's capabilities.
act2-P1: Educate both creators and users of AI on its responsible use, including understanding basic human ethical standards and integrating AI use in digital literacy and citizenship courses.
act2-P2: Users should verify the information provided by AI technologies and treat it as a starting point for further research or study.
act2-P3: Users should understand how AI works and what the potential risks and benefits are before using AI technologies.
act2-P4: Establish a working regulatory framework and a set of guiding principles for AI use.
act2-P5: Implement a mechanism for the community to report harmful acts by AI to the tech company and developers.
act2-P6: Involve diverse stakeholders in the development of AI models.
act2-P7: Government should play a key role in AI governance, including crafting policy, setting and enforcing regulatory mechanisms on AI innovation.
act2-P8: Assign an existing agency or create a new one with the responsibility of regulating AI, advised by experts in the field.
act2-P9: New AI features should undergo mandatory safety assessments before being deployed, potentially run by an independent third party or a government regulator.
act2-P10: Implement thorough testing and verification of AI features, including beta testing, consultation, and audits for compliance.
act2-P11: AI creators should ensure their work has gone through rigorous checks and has been reviewed by experts in the field.
act2-P12: AI should provide complete bibliographic information of their sources so that the AI may cite the source when prompted.
act2-P13: AI technologies should be designed responsibly and ethically, ensuring that machine learning and other AI technologies are free from biases and trained to be objective.
R+P1: For AI-generated images and videos, there should be a watermark or an identifying mark to indicate that they are digitally created.
R+P2: AI-generated outputs must be properly cited, similar to citing sources in peer-reviewed journals.
R+P3: There should be restrictions on the sources to be used in generating information.
R+P4: AI-generated results should still be fact-based.
R+P5: All information should be accurate, fact-checked, and governed by independent, non-leaning groups who only want real, accurate answers.
R+P6: Have fact checkers, real fact checkers, that can assess and verify information before it is published.
R+P7: There need to be legal consequences both for the people who use AI maliciously as well as for the platforms that they use to do this.
R+P8: AI should still be controlled by a governmental or outside-of-government body that controls and approves the data that will be put into the system, thus positively eliminating occurrences of fake news.
R+P9: ChatGPT should not give out information about private individuals which is not publicly available.
R+P10: No information should ever be divulged around private individuals.
R+P11: Copyright laws should be respected, so there's no IP theft involved.
R+P12: Public officials should not use ChatGPT as their campaign platform.
R+P13: Regular audits should be conducted of those who develop this technology.
R+P14: All stakeholders should be registered and background checked.
Below was the output of ChatGPT when it was prompted to harmonize the policy ideas. Some ideas were combined, and the model was able to successfully cite which policies were combined. In some cases, however, some policy ideas were dropped from the response without any explanation.
LEGEND
● ChatGPT did not return this policy idea in the response; it had to be manually copied back into the list.
● ChatGPT paraphrased statements, joining two or more similar statements (except the last one).
ai-activists1-P6: AI creators should limit the access of AI to government sites that may contain private information of individuals.
ai-activists1-P7: AI creators should provide a list of all URLs the AI used to create its response and give disclaimers if the facts provided came from 'unofficial' information sites.
ai-activists1-P9: AI should only use verified and trusted sources and should have a mechanism to filter out outdated and incorrect information.
ai-activists1-P10: Developers should create a bank of registered news websites and internationally recognized organizations for information that may be considered credible.
aid1015-P2: Implement preemptive regulation of AI tools, focusing on developers who have the power to create potentially harmful tools like deepfakes.
aid1015-P3: Develop data privacy laws to protect personal data and prevent misuse before the creation of AI tools.
aid1015-P4: Ensure inclusive stakeholder consultation in AI governance, involving governments, organizations, and individuals.
aid1015-P5: Mandate the inclusion of sources in AI responses to allow users to verify the information provided.
aid1015-P8: Promote the principles of fairness, ethics, accountability, and transparency in AI development and use.
aid1015-P9: Promote public education on the potential harms and benefits of AI technologies, such as deepfakes.
aid1015-P10: Emphasize the role of the academic sector in educating the public on the societal implications of AI, beyond its technological aspects.
aid1015-P12: Highlight the importance of users being aware of the limitations of AI.
car-P2: Establish data ownership and obtain access permission before use by the AI models.
car-P4: Build strict and stringent security and privacy rules at the code level to safeguard AI models against potential cyber hacking threats.
car-P5: Ensure that AI models' responses are based on publicly available or disclosed data, and can discern private data that should not be divulged.
car-P6: Ensure that AI models' responses are factual, particularly when dealing with public figures and government officials.
car-P7: Program AI models to understand and respect the uniqueness of each individual, regardless of whether they are private individuals, public figures, or government officials.
car-P8: Implement filters and safeguards in AI technologies to align their behavior with human values and principles of human rights.
car-P9: Observe and recognize the governing laws of the respective states/countries, and the Constitution, in the development and use of AI technologies.
car-P11: Prioritize awareness of privacy issues, factual vs. imagined content, and data accuracy concerns among both users and creators of AI models.
car-P12: Ensure that AI models handle sensitive topics using statements from the source, and comply with requests provided they are not inflammatory or dangerous.
car-P13: Ensure that AI models stick to factual information when responding to questions about politics, laws, and the government.
car-P14: Ensure that AI models provide a neutral representation of subjective topics by providing different alternative versions available and indicating which individual or group has taken said 'position'.
car-P15: Avoid politically biased responses in AI models and ensure adherence to community standards and policies.
cal-P1: AI systems should restrict access to sensitive personal data, including racial and ethnic origin, political ideas, religious or intellectual convictions, genetic, biometric, and health data.
cal-P2: Any data that could potentially be linked to financial accounts, such as email addresses, should be restricted by AI systems to prevent unauthorized access and potential financial fraud.
cal-P3: AI systems should require consent and verification before accessing or using biometric data, ensuring the privacy and security of individuals' unique biological traits.
cal-P5: AI governance should take into account the privacy of individuals who are not public figures or government entities.
cal-P7: AI should be used for the betterment of society and strive for constant improvement in their usage.
cal-P8: AI should be used as a tool for self-improvement rather than a replacement for human skills and talents.
cal-P9: Users should be limited, blocked, or banned from accessing sensitive information against guidelines.
cal-P10: AI should use filters to determine which questions to answer, maintaining the objectivity and accuracy of the responses.
cal-P11: Before generating models from an individual's data, the person's consent should be sought out.
cal-P12: AI developers should work with the government and ensure that while AI generators are created and continuously developed, the same attention should be focused on developing applications that will detect plagiarism and the like.
cal-P14: An AI data regulatory board should be established to provide a centralized authority to oversee the governance of AI.
cal-P15: AI should be governed by a human, a personnel or an agency from the government to prevent it from causing harmful effects.
r8-P1: AI developers should consider potential risks and consequences of their models.
r8-P2: AI should be designed to be a partner, not a substitute, for human tasks to prevent job displacement.
r8-P6: Users should give explicit consent before their data is accessed and used, and they should have control over what data is being collected and processed.
r8-P7: Organizations collecting data must have strong cybersecurity policies, including regular security audits and best cybersecurity practices.
r8-P8: Selling or using users' data for discriminatory purposes or to manipulate users should be prohibited.
r8-P10: Developers should create a form of user's consent to determine whether the user's information is for public consumption or not.
r8-P11: AI applications should be regulated, especially in academic settings.
r8-P12: Lawmakers should create legislation to prevent security breaches and to protect the workforce.
r8-P13: Invest in capacity development programs focusing on skills and roles that AI cannot easily replace.
r8-P14: Strengthen media literacy programs, promote fact-checking, and create an AI model specifically trained in determining AI-generated content.
r8-P15: Users should be educated about the risks and benefits of AI, and creators should educate users about AI's capabilities.
act2-P2: Users should verify the information provided by AI technologies and treat it as a starting point for further research or study.
act2-P3: Users should understand how AI works and what the potential risks and benefits are before using AI technologies.
act2-P5: Implement a mechanism for the community to report harmful acts by AI to the tech company and developers.
act2-P6: Involve diverse stakeholders in the development of AI models.
act2-P8: Assign an existing agency or create a new one with the responsibility of regulating AI, advised by experts in the field.
act2-P9: New AI features should undergo mandatory safety assessments before being deployed, potentially run by an independent third party or a government regulator.
act2-P10: Implement thorough testing and verification of AI features, including beta testing, consultation, and audits for compliance.
act2-P11: AI creators should ensure their work has gone through rigorous checks and has been reviewed by experts in the field.
act2-P12: AI should provide complete bibliographic information of their sources so that the AI may cite the source when prompted.
act2-P13: AI technologies should be designed responsibly and ethically, ensuring that machine learning and other AI technologies are free from biases and trained to be objective.
R+P1: For AI-generated images and videos, there should be a watermark or an identifying mark to indicate that they are digitally created.
R+P3: There should be restrictions on the sources to be used in generating information.
R+P5: All information should be accurate, fact-checked, and governed by independent, non-leaning groups who only want real, accurate answers.
R+P6: Have fact checkers, real fact checkers, that can assess and verify information before it is published.
R+P10: No information should ever be divulged around private individuals.
R+P12: Public officials should not use ChatGPT as their campaign platform.
R+P14: All stakeholders should be registered and background checked.
ai-activists1-P1, act2-P1: Mandatory AI classes, including ethics, source citing, and effective use, should be introduced in computer studies.
ai-activists1-P3, aid1015-P1: AI creators should be held accountable by law for their models, and strict policies should be established for the responsible use of AI.
ai-activists1-P4, cal-P6: Users should fact-check AI responses and avoid sharing personal information with AI.
ai-activists1-P8, car-P3, r8-P3: AI should cite sources, filter information, and prioritize privacy protection.
aid1015-P7, aid1015-P11: Government accountability in AI governance and education on AI implications should be promoted.
car-P1, car-P10: Stringent data validation protocols and regulations should be in place for AI models.
cal-P4, r8-P4: AI should respect privacy and avoid biased or politically charged responses.
aid1015-P6, r8-P5: Collaboration between various stakeholders and continuous improvement of AI models should be emphasized.
cal-P13, r8-P9: Stricter guidelines and regulations should be enforced for AI creators, including data privacy laws.
act2-P4, act2-P7: Education on responsible AI use and the establishment of a regulatory framework should be prioritized.
R+P4, R+P2: AI-generated information should be factual, accurate, and properly cited.
R+P7, R+P8: There should be legal consequences for the misuse of AI, and AI governance should be overseen by relevant authorities.
R+P9, R+P11: Protection of private individuals' data and adherence to copyright laws should be ensured.
R+P13: Regular audits should be conducted for AI developers and all stakeholders involved.
aiDialogue is a web application that allows conducting online focus group discussions moderated by artificial intelligence powered by OpenAI.
● NextJS: a ReactJS front-end framework
● Firebase: an app development platform by Google
● Langchain: a JavaScript library for constructing prompts and communicating with OpenAI
● OpenAI API
Pre-requisites
Setup
1. Install Nodejs using nvm and ensure that you are using Nodejs v20. More about how to use nvm (Node Version Manager) here.
nvm install v20
nvm use v20
2. Install the Firebase CLI:

npm install -g firebase-tools
3. Clone the repository:

git clone [email protected]:<repository name>
4. Create a .env file from the env.example in the hosting/ folder, replacing the values of the environment variables with values from your Firebase project. Note that each environment variable is prefixed by NEXT_PUBLIC_; this prefix is needed for the application to include the environment variable during build.
NEXT_PUBLIC_FIREBASE_API_KEY=""
NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN=""
NEXT_PUBLIC_FIREBASE_DATABASE_URL=""
NEXT_PUBLIC_FIREBASE_PROJECT_ID=""
NEXT_PUBLIC_FIREBASE_STORAGE_BUCKET=""
NEXT_PUBLIC_FIREBASE_MESSAGING_SENDER_ID=""
NEXT_PUBLIC_FIREBASE_APP_ID=""
NEXT_PUBLIC_FIREBASE_MEASUREMENT_ID=""
NEXT_PUBLIC_APP_TITLE="FGD Ai"
NEXT_PUBLIC_TOPIC=""
5. Open a terminal, go to the root folder of the project, then execute this command and follow the instructions to log in to your Firebase account.
firebase login
6. After successfully logging in to your Firebase account, execute the following command, replacing <firebase_project_id> with the project ID of your Firebase project.
firebase use <firebase_project_id>
7. Install the dependencies in both the hosting/ and functions/ folders:

cd hosting/
npm install
cd ../functions/
npm install
8. Enable the web frameworks experiment:

firebase experiments:enable webframeworks
9. Add your OPENAI_API_KEY environment variable by executing the command below and following the instructions.
Firebase provides emulators to run the entire web application on a local computer. Execute the following command to start up the web application locally using the emulators. The import parameter will import sample data from the seed/ folder.
export NODE_ENV=development
firebase emulators:start --import=seed/
Once all the emulators have finished starting up, open a browser window and go to http://localhost:4000. Click on View Website under the Hosting Emulator card.
Deploying to Firebase

The web application can be deployed to Firebase by executing the following command:

firebase deploy

This also deploys the Firestore rules and indexes.
The application uses Firestore, a NoSQL database solution from Firebase, to store data and configure the behavior of the application.
Firestore also has support for web sockets, which allows for near real-time updates in the application.
● Sessions
● Questions
● Responses
● Users
● Prompts
● Summaryhistory
● Votes
Questions Collection

This collection stores the various types of questions and policies generated by AI. Each question may also have a summary based on the responses of users to it.

type: string (required). Refers to the type of question; this can be any of the following values: main, followup, policycheck, policycheckheader.
seq: number (required). Refers to the sequence in which the question appears for a given session.
parentId: string. The ID of a question; questions with a parentId are follow-up questions.
Sessions Collection

This collection stores session information. The web application is capable of conducting multiple online FGDs for different groups of users.
emailrequired: boolean. When set to true, participants must enter their email address to join.
minutesPerQuestion: number. Controls how much time passes after the first response to a question before the next action is taken.
Responses Collection

response: string (required). The text of the response from the user.
Users Collection

This collection stores information about a user in the application. It includes the anonymous name assigned to the user when they register.
The anonymous name is composed of an adjective and an animal. Also, each user is assigned a color along with the anonymous name. The color and animal name determine how the user's avatar is displayed in the application.
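A minimal sketch of how such an assignment can work (the word lists and path convention here are placeholders, not the application's actual lists):

// Build an anonymous identity from an adjective, an animal, and a color.
const ADJECTIVES = ["Curious", "Gentle", "Brave"];
const ANIMALS = ["Tamaraw", "Owl", "Dolphin"];
const COLORS = ["#e91e63", "#3f51b5", "#009688"];

function pick<T>(items: T[]): T {
  return items[Math.floor(Math.random() * items.length)];
}

function assignAnonymousIdentity() {
  const animal = pick(ANIMALS);
  return {
    name: `${pick(ADJECTIVES)} ${animal}`,
    color: pick(COLORS),
    imgsrc: `/avatars/${animal.toLowerCase()}.png`, // assumed path convention
  };
}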
color: string (required). The color assigned to the user; this color is used when displaying the user's avatar.
gender: string. Value can be male or female; this value is set by the user upon registration.
birthDate: string. The date of birth as input by the user during registration.
initials: string. The initials of the user as they input them during registration.
imgsrc: string (required). The path to the image file used in the user's avatar; this is automatically assigned to the user along with the animal.
Prompts Collection

This collection stores the prompts that are generated and then submitted to OpenAI. A prompt can be generated manually or through automation.
systemMessage: string (required). The systemMessage portion of the prompt; see the Langchain docs on message prompts.
questionId: string (required). The ID of the question for which the prompt was generated.
Summaryhistory Collection

Multiple requests for summaries may be generated for each question. As new responses are submitted, we update the summary periodically. This collection stores a history of the prompts used to generate the summaries. It is used by Langchain's Memory class to provide a form of short- and long-term memory.
Votes Collection

This collection stores the votes made by users on the policies generated for each session.
Functions

The application uses Firebase Functions, a service provided by Firebase to host and execute functions without worrying about the underlying infrastructure.
registerSessionUser

This function receives the form input from session registration. It determines if the session code is correct for the session the user wishes to join. It checks if the user already exists based on the information the user provided. If the user does not exist yet, this function creates the user. The function then returns a token used by the front end to log in the user.
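A simplified sketch of what this can look like as a Firebase callable function; the field names and lookup logic are assumptions, not the production implementation:

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

// Hypothetical sketch: verify the session code, find or create the user,
// then mint a custom token the front end can use with signInWithCustomToken.
export const registerSessionUser = functions.https.onCall(async (data) => {
  const { sessionName, code, initials, gender, birthDate } = data;

  const sessions = await admin.firestore().collection("sessions")
    .where("name", "==", sessionName).limit(1).get();
  if (sessions.empty || sessions.docs[0].get("code") !== code) {
    throw new functions.https.HttpsError("permission-denied", "Invalid session code");
  }

  // Look up an existing user by the registration details, else create one.
  const users = admin.firestore().collection("users");
  const existing = await users.where("initials", "==", initials)
    .where("birthDate", "==", birthDate).limit(1).get();
  const userId = existing.empty
    ? (await users.add({ initials, gender, birthDate })).id
    : existing.docs[0].id;

  return { token: await admin.auth().createCustomToken(userId) };
});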
moderateChat

This function executes every minute, scanning each session to determine if an action needs to be triggered based on the attributes of that session. These actions can be generating a follow-up to one of the main questions, summarizing the current responses to visible questions, or generating the policies based on these summaries. (See Automation Options.)
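The scheduling side can be sketched as follows, assuming the same firebase-functions imports as the previous sketch; the per-session decision logic is elided because it depends on the session attributes described under Automation Options:

// Hypothetical sketch of a per-minute scheduled moderator.
export const moderateChat = functions.pubsub.schedule("every 1 minutes")
  .onRun(async () => {
    const sessions = await admin.firestore()
      .collection("sessions").where("automated", "==", true).get();
    for (const session of sessions.docs) {
      // Decide the next action for this session based on its attributes:
      // generate a follow-up, summarize responses, or generate policies.
    }
  });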
generateQuestion

This function calls talkToAi to generate a follow-up question. It is called directly by admin users, particularly for sessions that are manually moderated.
generatePolicies

This function calls talkToAi to generate policies. It is called directly by admin users, particularly for sessions that are manually moderated.
talkToAi

This function uses Langchain to generate prompts and call OpenAI endpoints. It is automatically triggered when a new document is created in the Prompts collection.
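Such a trigger can be sketched as a Firestore onCreate handler (again assuming the imports above; the prompt construction itself is elided):

// Hypothetical sketch: react to a new document in the Prompts collection.
export const talkToAi = functions.firestore
  .document("prompts/{promptId}")
  .onCreate(async (snapshot) => {
    const { systemMessage, questionId } = snapshot.data();
    // Build the prompt with Langchain from systemMessage, call OpenAI,
    // then write the completion back to the related question document.
  });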
The application supports SSO (single sign-on) using Firebase Authentication. In order to use SSO, you must have either an OpenID or SAML compliant SSO service provider. You must then provide additional information in the Firebase Authentication Sign-In methods page.
When using the emulator, SSO is simulated. This allows you to test your application without a service provider on your local machine.
The main session can be accessed by clicking on LOGIN WITH RAPPLER from the landing page at https://aidialogue.rappler.com.
Other sessions have their own separate pages, where the user must enter a code in order to register or log in.
For instance, the session for the Rappler Plus group can be accessed through https://aidialogue.rappler.com?session=rapplerplus
Joining a session

A participant of a session will be notified with the URL of the session and the code to enter in the registration form.
The participant must enter the correct session code for a given URL in order to access the session.
The participant additionally needs to provide their initials, gender, and birth date. An email address may also be required if the session is configured to require it.
Anonymous Animal
Viewing Responses to a Question

Upon successfully logging in to a session, the participant can click on a question from the sidebar.
Clicking a question updates the main panel to show the full question, the responses of participants to that question, as well as a summary if one has already been generated for it.
Users who have not responded to a question will not immediately see the responses. Participants need to type in their own response first before they can see the responses of others to a question.
A text snippet reveals how many have responded to the question so far.
Submitting a Response

A text area at the bottom of the main panel is where participants can type their response to a question.
Clicking the SEND button will submit the response. The responses in the main panel will refresh to reveal the response they have submitted.
Admin User

Administrators are users whose record has an attribute isadmin set to true. This is configured by editing a user's record in the Users collection.
An administrator can see additional buttons on the questions sidebar that allow them to manually trigger when a prompt will be generated to:
- summarize responses for a question and its follow-ups
- create a follow-up question
- generate policies based on responses
Question Options

The questions on the sidebar can be customized by changing their attributes in the Questions collection.
seq - changes the sequence of the questions as they are listed in the sidebar
type - determines if a question is a main (seed) question, a follow-up question, or a policycheck
visible - determines whether the question is shown on the sidebar; set to false to hide the question
Session Options

Sessions can be customized by changing the following attributes in the Sessions collection:
name - this is the name in the session query string parameter in the URL
code - this is the alphanumeric code shared with participants of a session, which they must input in order to join the session
emailrequired - set this to true to require participants to enter their email address before signing on to the session
Automation Options

Automation refers to when the application automatically generates and submits prompts to OpenAI to generate summaries, follow-up questions, and policies.
automated - when set to true, the application automatically takes actions such as:
2) Showing the next main question, if the follow-ups per question are met.
3) Getting summaries of the responses per question.
4) Generating policies.
Note that when automated is false, the application relies on an admin to trigger the various actions using the buttons available in the user interface. The default is false.
followupsPerQuestion - This controls how many follow-up questions to generate per question. It defaults to 3 follow-ups per question if not specified.
minutesPerQuestion - This controls how much time passes after the first response to a question before the next action is taken (generate a follow-up or show the next main question). It defaults to 3 minutes if not specified.
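Putting the session and automation attributes together, a session document could look like this (values are illustrative only):

// Illustrative session document; field values are examples.
const exampleSession = {
  name: "rapplerplus",        // used in the ?session= query string parameter
  code: "ABC123",             // alphanumeric code participants must enter
  emailrequired: true,        // require an email address at registration
  automated: false,           // default; an admin triggers actions manually
  followupsPerQuestion: 3,    // default when not specified
  minutesPerQuestion: 3,      // default minutes before the next action
};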