Presentation Czy

Hello, I'm Ziyi Chen. Today I'm going to present the topic of data issues in the age of AI, and I will explain it from three aspects.

First, the source of the topic. My topic is inspired by Unit 5……


This reminds me of what Baidu's Robin Li (李彦宏) once said in 2018……

Nowadays, we talk a lot about privacy.


We've also seen a number of privacy breaches, such as Facebook's 'Datagate' (the Cambridge Analytica scandal). However, there is a significant difference between 'data privacy' and 'the right to protection of personal data', so I'd prefer to use the latter to describe the data issues of AI.
Now, let me elaborate on the difference between……
I will talk about three differences. Firstly, the nature of the right is different: the former is mainly a passive personality right, which 'can usually only be claimed by the right holder when the right is infringed', while the right to personal data is an active personality right, which 'can be actively exploited by the right holder in addition to passive defence against infringement by third parties'.
Secondly, the scope of the objects of the two rights is different: the objects of the former are mainly private data, while the objects of the latter include non-private data in addition to private data.
Thirdly, the content of the right is different: the content of the former mainly includes maintaining the peace of one's private life, the non-disclosure of one's privacy, and self-determination over one's private life, while the latter mainly refers to the 'domination and self-determination of personal data'.

So how do AI systems challenge this right? Firstly, the complex operation of AI systems and the opacity of their automated decision-making make it difficult for users to exercise their corresponding right to informed decision-making.
Although platforms inform users of the intention and scope of collecting their information through terms of service, many users still accept these terms without reading them. Some privacy policies are difficult to read and understand, requiring a high level of education to make sense of them. As a result, terms of service are reduced to a mere formality.

Secondly, even when personally identifiable information is not obtained, algorithms can still piece together a picture of an individual through association and classification. They can also violate the information privacy of an entire group once that group is identified, and because violations of group information privacy take more subtle forms, it is increasingly difficult to recognise and assess these violations and their consequences.
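To make this 'piecing together' concrete, here is a minimal illustrative sketch in Python of how a few seemingly non-identifying attributes can be linked across datasets to single out a person. All names, datasets, and attributes here are hypothetical, not taken from any real platform.

```python
# Illustrative sketch only: hypothetical data showing how quasi-identifiers
# (ZIP code, birth year, gender) can re-identify a person without any direct PII.

# An "anonymised" activity log: no names, just attributes and behaviour.
anonymised_log = [
    {"zip": "10001", "birth_year": 1990, "gender": "F", "searched": "diabetes treatment"},
    {"zip": "94105", "birth_year": 1985, "gender": "M", "searched": "holiday flights"},
]

# A separate public dataset (e.g. a membership list) that does contain names.
public_register = [
    {"name": "Alice Example", "zip": "10001", "birth_year": 1990, "gender": "F"},
    {"name": "Bob Example", "zip": "94105", "birth_year": 1985, "gender": "M"},
]

def reidentify(record, register):
    """Link an 'anonymous' record to named people sharing the same quasi-identifiers."""
    return [
        person["name"]
        for person in register
        if (person["zip"], person["birth_year"], person["gender"])
        == (record["zip"], record["birth_year"], record["gender"])
    ]

for record in anonymised_log:
    matches = reidentify(record, public_register)
    if len(matches) == 1:  # a unique match means the record is no longer really anonymous
        print(f"{matches[0]} likely searched for: {record['searched']}")
```

A unique match on just a few such attributes is often enough to turn an 'anonymous' record back into a named individual, which is why stripping names alone does not protect personal data.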
In light of the last point on the previous page, we should ask the question: what is our data being used for? It's true that using user data can provide us with a more personalised service, but if a platform's advertising revenue accounts for more than 90% of its total revenue, what are the algorithms fed with user data more likely to be used for?
For example, for content distribution and social media platforms, behind the free service is the commoditisation of users' private data. That is to say, the platforms abstract your privacy, attention, online interactions, likes, and even emotional fluctuations into analytics data, which is then integrated, packaged, and combined with the data of many more users to form user profiles used for more accurate advertising targeting.

This is what is meant by the saying: when an online service is free, you are no longer the customer, but the commodity.
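As a purely illustrative sketch of the profiling step described above (the event types, topics, and weights are hypothetical, not any platform's actual method), the pipeline from raw interactions to ad targeting might look roughly like this:

```python
# Illustrative sketch: turning raw engagement events into an ad-targeting profile.
# All event types, topics, and weights are hypothetical.
from collections import defaultdict

# Hypothetical weights: how strongly each kind of interaction signals interest.
WEIGHTS = {"view": 1.0, "like": 3.0, "share": 5.0, "dwell_seconds": 0.1}

def build_profile(events):
    """Aggregate a user's events into interest scores per topic."""
    profile = defaultdict(float)
    for event in events:
        profile[event["topic"]] += WEIGHTS.get(event["type"], 0.0) * event.get("value", 1.0)
    return dict(profile)

def pick_ad(profile, ad_inventory):
    """Choose the ad whose topic the user scores highest on."""
    return max(ad_inventory, key=lambda ad: profile.get(ad["topic"], 0.0))

events = [
    {"type": "like", "topic": "running shoes"},
    {"type": "view", "topic": "baby products"},
    {"type": "dwell_seconds", "topic": "running shoes", "value": 45},
]
ads = [{"id": "ad-1", "topic": "running shoes"}, {"id": "ad-2", "topic": "baby products"}]

profile = build_profile(events)
print(profile)                 # e.g. {'running shoes': 7.5, 'baby products': 1.0}
print(pick_ad(profile, ads))   # the running-shoes ad is selected
```

The point is not the exact arithmetic but the design: everything you do is reduced to numbers whose only purpose is to decide which advertisement you see next.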

In the famous article 'The limits of transparency: Data brokers and commodification', Crain points out that for platforms, user data is no longer just data, but a commodity. Once commodified, user data is destined to be packaged across different platforms and black markets and traded through the hands of data brokers, to the point where it can no longer be traced by any platform, organization or individual.

Let's go back to the original question: do we really not care about our privacy? We may think our little bit of personal data is nothing, but the combined data of countless users, modelled by algorithms, could be of incalculable value: it can predict city traffic, flu trends, and even the next general election.
Of course, with regard to data issues, we could also talk about discrimination and prejudice, labor exploitation, and so on. Does artificial intelligence really help to solve these problems, or does it exacerbate them in another form under the guise of so-called 'high technology'?
I think these are things we need to think about as students of the humanities
and social sciences.
That’s all. Thank you.
