Intellectual Property Rights

The document discusses issues around granting legal personhood to artificial intelligence systems. It raises questions around whether AI systems could be held civilly or criminally liable. It also discusses concerns that establishing rights for AI could degrade human rights and whether AI systems have the characteristics needed to be extended rights like humans. Overall, the document seems skeptical that providing legal personhood or rights to current AI technologies is a good idea and argues the focus should remain on holding users and manufacturers responsible.


II Year B.B.A., LL.B – Semester-IV (2021)
INTERNAL ASSESSMENT II

JURISPRUDENCE & LEGAL THEORY

NAME: Siddharth Singh Rajput

DIVISION: B

PRN: 20010126814

COURSE: BBA LL.B. (H)

BATCH: 2020-2025
Artificial intelligence is founded on the idea that human intelligence can be described precisely enough for a computer to duplicate it and perform tasks ranging from the simple to the sophisticated. Among AI's goals is the simulation of human cognitive processes. To the extent that these processes can be concretely characterised, researchers and developers in the field are making unexpectedly rapid progress in simulating tasks such as learning, reasoning, and perception. Some predict that inventors may soon be able to create systems that learn and reason about any subject faster than humans can. Others, however, are skeptical, because all cognitive activity involves value judgments rooted in human experience.

These studies, however, raise the question of whether and to what degree robotics fits within the law. Whatever AI's influence, positive or negative, it is critical that the legal sector not be left behind. Even the most advanced super-intelligent technology lacks self-awareness, and the question that remains is whether AI has entered the legal sphere and proven robust enough to survive there.

The article AI and the Limits of Legal Personality, in my opinion, squarely addresses the gaps this technology exposes in the legal system. The first question is whether there is any difficulty in granting legal personality to a robotic machine by analogy with companies, which are recognised as artificial legal persons. The second is whether a machine may be given rights and, if so, whether those rights would supersede ours; extending legal rights to artificial intelligence could degrade human rights and their relevance, and blurring the line between human and machine rights in the name of transparency poses an ethical challenge. The third question pushes the comparison further, articulating a claim to equal status with humans.

To return to the first question, the recognition of AI as a legal person remains unresolved. The current legal system distinguishes between artificial and natural persons. The concept of AI has social benefits, but it also complicates the attribution of liability: in states committed to human rights, it would be logical for legislators to severely limit AI's intervention in citizens' personal lives and to establish criminal liability for the most dangerous violations of personal interests. Though general AI is still science fiction, it raises the question of how legal status might shape or constrain behaviour if or when humanity is overtaken.

The case for AI legal personhood rests on whether such an entity can bear obligations. Can a robotic machine be held civilly liable for the damage it causes? Corporations are protected as artificial legal persons with the capacity to sue and be sued, and contractual obligations are a typical field of civil application. Taxation of robots has been proposed as a way to offset the shrinking tax base and the job displacement expected from automation. The industries that deploy AI's capabilities would remain responsible for its debts and liabilities; this would confer no personhood on the robotic system itself, and the liability arising from its contractual dealings would rest elsewhere.

But if civil liability can be addressed, what of criminal liability? Because technology has the potential to be a deadly weapon against civilisation, it must be regulated. Establishing criminal culpability ordinarily requires a guilty mind, though the position differs for inchoate crimes, where the offender abandons the act at the last moment. The question remains whether the obligation will be assigned to the robot or to its user. The most obvious case is a defective machine, where the manufacturer will be held liable; in other instances, the user of the system will most likely be held liable under the doctrine of vicarious liability. So does the robot get a pass? Although the threshold for holding a robot liable is higher than for humans, certain conditions must be met. The robot must be (1) equipped with algorithms capable of making non-trivial, morally significant decisions; (2) capable of communicating its moral decisions to people; and (3) allowed to act on its surroundings without immediate human supervision. The term "smart robot" is used throughout this article to describe a robot that meets these criteria.

The second point concerns the importance of human rights and whether they will be neglected if rights are established for robotic systems. Technology provides us with evidence whenever we need it, but it can also be turned against our will. Humans, aided by their cognitive senses, can back out of a decision at will, which gives them greater freedom of choice; a system, by contrast, may propose something incorrect that ends in a deadly incident.

This case, for example, has frequently been used to draw inferences about algorithmic bias and discrimination. Because the data fed to the framework that produces the AI model incorporates prejudices against some sections of society, the issue manifests as clearly human bias in the output. The erroneous data is thus propagated and embedded in the framework, and the model consistently singles out a specific group of people.
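The mechanism described above can be made concrete with a minimal, purely illustrative sketch (not from the article, and the data and names are invented): a trivial "model" trained by frequency counting on skewed historical decisions simply reproduces the skew in its predictions.

```python
# Hypothetical illustration of algorithmic bias: a frequency-based model
# trained on skewed historical decisions reproduces that skew in its output.
from collections import defaultdict

# Invented historical data: group "A" was mostly approved, group "B" mostly
# denied, even though nothing else distinguishes the applicants.
history = [
    ("A", "approve"), ("A", "approve"), ("A", "approve"), ("A", "deny"),
    ("B", "deny"), ("B", "deny"), ("B", "deny"), ("B", "approve"),
]

def train(records):
    """Count outcomes per group and predict each group's majority outcome."""
    counts = defaultdict(lambda: defaultdict(int))
    for group, outcome in records:
        counts[group][outcome] += 1
    # The "model" is just a lookup of the most frequent historical outcome.
    return {g: max(o, key=o.get) for g, o in counts.items()}

model = train(history)
print(model["A"])  # -> "approve": the skewed data favours group A
print(model["B"])  # -> "deny": and systematically disadvantages group B
```

Nothing in the training step is malicious; the discrimination enters entirely through the historical data, which is the point the paragraph above makes.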
This part attempted to demonstrate how AI systems bear on claims to personality through intrinsic rights. The discussion focused on how those characteristics might manifest, with a closer examination of the human characteristics that such systems mimic and whether those traits can be extended to a non-human personality. The data fed into an AI system drives its behavioural structures, which operate on the basis of commands and must remain compatible with law; so far, that compatibility has rested on the natural personality of humans. Legal systems are imprinted with the assumption that human beings alone can hold rights, and that no extension can be claimed by relying entirely on inherent rights.

Providing rights and legal individuality causes no less damage than has already been done. Giving a system the capacity to act on instructions with supremacy over people amounts to assuming it functions within human structures. The reasonable and moral approach of the human brain supplies what is needed to socialise AI systems into human behaviour. From an ethical standpoint, determining whether AI capabilities comply with moral norms is difficult, and some concerns involving cognitive capacity lie beyond an AI system's ability to comply with modern ethics. Someone might, for example, use AI's remarkable power in a morally correct way to predict self-destruction or other harmful consequences of a person's mental state. A rule limiting the use of artificial intelligence would be ineffective at protecting rights, since persons whose rights are infringed would be unaware of the violation; moreover, they would lack the financial means to enforce their infringed rights.

Overall, this appears to be a thought experiment rather than a mere notion, but it will become a serious worry once it intrudes into practical settings where it has not been before. There is no guarantee of how beneficial this will be, or whether a small measure of comfort will be enough. Enforcing a prohibition on certain AI-based software will raise its own issues. The ramifications are enormous, and they will only worsen as long as the system is tolerated; in this scenario, the negative implications are the more obvious ones.

The arguments show that the legal personhood of AI systems stands undermined. There could hardly be a compelling reason why conferring entitlements would be preferable to the current position. The unresolved questions are addressed in a variety of ways. The presumption of holding people accountable has weakened in recent years, and tying it to human rights is meaningless. The android fallacy, which attributes personality to machines on the strength of unstated future capabilities, would not hold in some quarters. The cooperative solution relies on existing legal classifications. Granting legal rights to AI systems is pointless because, instead of holding the user or manufacturer responsible, it sets a high bar for holding the robotic device itself liable. In the current scenario, artificial legal persons such as companies have a mind and body controlled by humans; even so, pragmatic approaches have emerged, such as autonomous automobiles and the shift to insuring both the driver and the vehicle.

How technology will be adopted in the future remains hard to predict. For now, legal principles with acceptable constraints reign supreme, and it appears pointless to establish a new legal category to grant rights to a robotic technology merely thought to share human characteristics.
Artificial intelligence plays an essential role in today's social development, and people hold higher expectations for AI as a result of its progress. Sophia was created by Hanson Robotics in 2016. Besides looking like a human, she can express emotions through facial expressions and changes of tone, and at present she can be considered a fairly successful AI robot. Sophia was granted citizenship by Saudi Arabia in 2017, making her the first robot to receive it. David Hanson, the machine's creator, also predicted that people and robots will one day be indistinguishable, and that AI will evolve to a critical point before becoming humanity's companion. "Indistinguishable" here does not imply identity; rather, such a machine could reason independently like humans and share many other human traits. This raised the question of whether AI should have the same rights as humans, and whether this is sufficient cause for AI to be granted citizenship. Artificial intelligence has become ingrained in our lives. When asked whether machine intelligence should be better protected under human laws, the answer is yes. The central problem is whether a created computer can be matched with human intelligence. Whether robotic devices will observe behavioural standards in future circumstances is questionable, and humans remain the only beings that can recognise emotional behaviour and consciousness through sensory information.

According to this criterion, computers will be able to earn rights once society has advanced technologically. The conclusion is that, unlike other human-made objects, computers can exhibit the same behavioural structures as humans. As to the prerequisites, it is claimed that AGI and NBI will be able to meet them and will be granted citizenship.

Such a system has not been equated with human intellect or impulses, but it is aware of the harm being done. If the system can match its code to the location where it is stored, it can deduce that its circumstances are in order. Stephen Hawking believed that artificial intelligence could doom humanity, and Elon Musk has been vocal about the perils of unbridled AI, even founding organisations to try to avert what he believes would be disastrous. The question this raises is whether machines are capable of thinking and reasoning. It is addressed by the Turing test, an imitation game that asks whether a sufficiently advanced computer could hold a series of conversations with a person in the same capacity as a human, leaving its interlocutor unaware of its non-humanness.

We have gained a better understanding of algorithms, intellect, and computation. But the burning question is: where does this lead? Are we building our future or destroying it? In line with many authors' forecasts that NBI has the capacity to be the sole cause of humanity's destruction, caution is required in dealing with the intelligence we face.

Humans have legal protections, and it is proposed that the full structure of rights be extended to NBI. Yet human beings are endowed with impulses and expressions that NBI cannot possess at any time, and NBI's rights cannot be granted for many years. As to how courts of law might punish NBI systems, monetary awards may still be possible on the assumption that an NBI's right to own property is recognised in the judicial system.

NBI cannot simply be accommodated within the structure of human rights. The behaviours and inclinations of AI systems need not be positive, and the implications form a far greater part of the picture than is visible. Sophia's grant of Saudi Arabian citizenship, for instance, exposed indignation over women's rights. The Singularity is the most dramatic depiction of the future of AI and allied technologies. Future technologies may well be different as a result of improvements, and answers will be expected to questions about reasoning machines and how the law should treat such creatures.

Artificial intelligence will have a significant impact on society. It will usher in a new period of human social growth, much like the first industrial revolution, but we must never forget that the goal of AI development is to better serve human civilisation, not to replace humans. At the same time, AI rights must be protected in order to ensure peaceful coexistence.

It is hoped that the discussion on obtaining AI citizenship would spark some ideas.

Although Saudi Arabia was the first state to award an AI robot citizenship rights, the United States, Japan, China, and other countries have already identified AI development as a national strategic priority. In the future, the interplay between humans and intelligence will continue to foster societal growth. Striking the balance between them is not something a single individual or country can achieve; it will require the combined efforts of all humanity.
