
The IP Rights of Artificial Intelligence

Updated: Apr 30, 2023

In this article, I discuss a recent trend, pushed by various marketing firms, law scholars and industry players, around the notion that intellectual property rights, which are granted to humans, governments, companies or any proper legal entity, can also be granted to artificial intelligence systems and technologies. There may be sound arguments in favour of the proposition, in the fields of technology and jurisprudence. My view, however, is that digital technologies such as AI (and AI-integrated systems) have not yet matured to the point where granting such rights is a serious possibility. This article can be read as a counter-proposition on the question of recognising the IP rights of AI systems and technologies.


The CEI and SOTP Classifications of ISAIL: A Quick Recap

Let us quickly recap the article in which I discussed the legal status of artificial intelligence technologies. As per the classifications provided by the Indian Society of Artificial Intelligence and Law on the entitative status of artificial intelligence technologies, there are two clear ways to classify them: CEI and SOTP.



As per that diagram, I had also proposed that any AI technology/system can be manifestly present or available within any other class of technology, in the tangible forms that we understand. For instance, a blockchain-based system might require machine learning tools, or IoT and RFID tags might require the internal support of AI technologies for execution purposes. Sometimes, a class or sub-class of AI technology could even exist within another class or sub-class of AI technology, where that is legitimately possible.


Now, the idea of manifest availability changes how legal theory deciphers what kinds of rights, privileges, liabilities and agency can be accorded to any AI system/technology. On a case-to-case basis, the situation can understandably become bleak or uncertain, because it could lead a law professional to interpret such an incidence in heavily reductionist terms. Sometimes, technologies are embedded in ways shaped by industrial trends such that disputes can still be addressed in an ordinary fashion (keeping other factors aside for a while). Yet, in the field of judicial governance and alternative dispute resolution, especially in the technology law domain, it is prudent to assume that complications may genuinely arise over the agency of any technology put to use.


This at least shows a simple phenomenon: unless proper trends are adapted to, sweeping generalisations about the recognition of AI technologies' status in legal systems cannot be made.


Dr Jeffrey Funk, a technology consultant (formerly at NUS Singapore), recently gave an intriguing example via a LinkedIn post, relating to an announcement from the University of Chicago that its data and social scientists had developed an algorithm that forecasts crime by learning patterns in time and geographic location from public data on violent and property crimes. The model, however, predicts from historical data and does not predict specific events. Gary Smith has written an article for Mind Matters on the same issue, where he provides an informed critique of algorithmic criminology. An excerpt from the article is provided below:

Algorithmic criminology is now widely used to set bail for people who are arrested, determine prison sentences for people who are convicted, and decide on parole for people who are in prison. Richard Berk is a professor of criminology and statistics at the University of Pennsylvania. One of his specialties is algorithmic criminology: “forecasts of criminal behavior and/or victimization using statistical/machine learning procedures.” He wrote that, “The approach is ‘black box’, for which no apologies are made,” and gives an alarming example: “If I could use sun spots or shoe size or the size of the wristband on their wrist, I would. If I give the algorithm enough predictors to get it started, it finds things that you wouldn’t anticipate.” Things we don’t anticipate are mostly things that don’t make sense, but happen to be coincidentally correlated.
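To make Smith's point concrete, here is a minimal, hypothetical sketch of what pattern-based crime "forecasting" amounts to. It is not the code of any system cited above, and all cell names and figures are illustrative: the model simply aggregates historical incident counts per area and time window and extrapolates them forward, so it can only restate past frequencies, never predict a specific event.

```python
# A toy "algorithmic criminology" model: count historical incidents
# per (grid cell, hour of day) and rank the busiest windows.
from collections import Counter

# Illustrative historical records: (grid_cell, hour_of_day) per incident.
# A real system would ingest public crime datasets instead.
history = [
    ("cell_12", 22), ("cell_12", 23), ("cell_12", 22),
    ("cell_07", 14), ("cell_03", 2), ("cell_12", 21),
]

counts = Counter(history)

def forecast(top_n: int = 3):
    """Return the (cell, hour) windows with the highest historical
    frequency: a rate estimate, not a prediction of any one crime."""
    return counts.most_common(top_n)

for (cell, hour), n in forecast():
    print(f"{cell} around {hour:02d}:00 -> {n} past incidents")
```

However many extra predictors such a system ingests, sunspots and shoe sizes included, its output remains a frequency estimate over past data.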

Now, in the field of AI ethics, discussions have already shifted from attaining responsible AI (ascribing imagined responsibilities to the AI system/technology and its creators) to explainable AI, where the questions revolve around the classic black box problem: algorithms lack explainability to human data subjects (to borrow the GDPR lexicon, for example).


As the accompanying infographic shows, in general and with exceptions, the responsible AI condition for any AI technology/system could be pre-emptive or ex-ante, to prevent harm or damage to human data subjects (for example). The explainability of AI technologies, in general, comes into question when checking them is a routine necessity or when an impact assessment has to be done. There are, without any doubt, nuances in how both concepts materialise.

Let us now examine the problem with even granting IP rights to AI technologies.


The “Rights” of AI Technologies/Systems within IP Law

It is a basic understanding that the rights, duties, liabilities, facets of accountability and even the agency of any tangible legal entity have to be decided on a clear and factual basis. Jurisprudence may be old, and in the case of digital technologies, precedents might not even exist in many Global North and Global South countries. Nevertheless, sometimes regulators intervene (AI policies, for example, India’s NITI Aayog’s Responsible AI reports of 2020), legislative competence and approaches are sharpened (for example, the European Commission’s draft proposal of the Artificial Intelligence Act), or judicial bodies intervene and define new principles or norms (for example, Commissioner of Patents v Thaler [2022] FCAFC 62, which overturned the earlier ruling that an AI system could be named as an inventor).


Now, there are many general reasons why such soft or hard interventions are undertaken. Some of the reasons are outlined as follows:

  • Some scholarly opinions by a judge or a member of a regulatory/executive/legislative body could have contemporary relevance, and their collective thinking could be taken into account to mobilise the process of policy formulation, acceptance and democratisation, from an industry point of view.

  • In the field of law, it becomes necessary to start defining at least some basic aspects of the technology under scrutiny. Without any first principles, there is no possibility of ensuring the accountability of the companies and creators who benefit from the use of the technology as a subject matter. The European Union’s Artificial Intelligence Act (draft), in its Annex 1, provides a narrow definition of what constitutes artificial intelligence, for example.

  • States and their judicial and executive functionaries are the closest means of ensuring that some relevant intervention or action is sought. For example, in India, the Delhi High Court has delivered important judgments on Twitter’s non-adherence to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which then (along with public comments) contributed to discussions on reviewing the 2021 Rules and notifying amendments to them.

A general problem that emerges in the case of intellectual property law, be it copyrights, trademarks, patents, industrial designs or even integrated circuits, is that it is important to signify at least some corollaries of what rights, duties or agency we would even accord to an artificial intelligence technology. The challenges are immense, since much formalisation cannot happen unless technology trends in the industry, globally and nationally, are stable. If companies are manufacturing AI technologies and systems that are weak on explainability, and whose efficiency has not even been tested properly during their cycle of making, there is no reason to provide any special legal faculties to AI technologies under IP law.


For example, people may claim that an AI system should be granted the rights of an inventor, or that some AI software must be granted copyright for “making” an artistic work.



The problem, however, is that the human touch in any creation cannot be replaced with algorithm-based anthropomorphism. This is similar to the claims made by researchers at the University of Chicago about an AI system predicting a crime even before it happens. AI hype is a serious problem because it shields the black box problem of AI systems and technologies. It also ensures that regulators in the market fail to develop informed and sustainable rules and approaches to address the relevance and use of AI technologies, sector-wise. Cindy M Grimm provides an interesting example in an article published by the Brookings Institution.

We can illustrate this failure with a simple example. Let’s say the program manager requests a robot system that can see an apple and pick it up. The actual implementation is a camera that detects red pixels that form a rough circle. The robot uses two consecutive images to estimate the location of the apple, executes a path that moves the gripper to the apple, then closes the gripper fingers and lifts. When deployed, the robot mistakes the picture of a hot air balloon on a shirt and tries to drive the gripper through the person in an attempt to pick it up. This failure is not at all a surprise given the implementation description but would come as a shock to the person who was only told that the robot could “see apples and pick them up.” Many of the failures that seem to plague robots and AI systems are perfectly clear when described in terms of implementation details but seem inconceivably stupid when described using anthropomorphic language.
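To see why the anthropomorphic description misleads, here is a minimal sketch, assuming an implementation like the one Grimm describes: thresholding red pixels and taking their centroid. It is written in Python with NumPy, all thresholds are illustrative, and nothing in it encodes what an apple is, which is exactly why a red balloon on a shirt passes the test.

```python
import numpy as np

def find_apple(image: np.ndarray):
    """image: H x W x 3 uint8 RGB frame.
    Returns the centroid (x, y) of the red blob, or None."""
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    # "Apple detection": pixels that are bright red relative to green/blue.
    mask = (r > 150) & (r > g + 50) & (r > b + 50)
    ys, xs = np.nonzero(mask)
    if len(xs) < 100:                 # too few red pixels: no "apple"
        return None
    return (xs.mean(), ys.mean())     # blob centroid = "apple" location

# Synthetic test: a 200x200 frame with one bright red square.
frame = np.zeros((200, 200, 3), dtype=np.uint8)
frame[80:120, 80:120, 0] = 200
print(find_apple(frame))              # -> roughly (99.5, 99.5)
```

Two consecutive centroids would then be used to estimate motion and plan the gripper path; at no point does the pipeline know what an apple is.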

The same case could easily apply to Creative Adversarial Networks (CANs), a subset of Generative Adversarial Networks. S Will Chambers, writing for Towards Data Science, explains how CANs work:

In the GAN, which the authors call a Creative Adversarial Network (CAN), a generator network creates images and a discriminator network, which is trained on 81,500 paintings, critiques the generated images based on aesthetics. Interestingly, when the CAN images were placed beside contemporary human artworks, human evaluators could not tell which images were artificially-generated. In many cases, the CAN images were rated aesthetically higher than the human artwork.
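For readers interested in the mechanics, here is a minimal sketch of the CAN generator objective as described above, written in PyTorch. It illustrates the published idea, an adversarial "looks like art" term plus a style-ambiguity term, and is not the authors' actual code; the batch size and number of style classes are arbitrary.

```python
import torch
import torch.nn.functional as F

def can_generator_loss(real_fake_logit: torch.Tensor,
                       style_logits: torch.Tensor) -> torch.Tensor:
    """real_fake_logit: (batch, 1) discriminator 'is this art?' score
    for generated images; style_logits: (batch, n_styles) the
    discriminator's style classification of the same images."""
    # 1. Ordinary GAN term: the generator wants its images judged as art.
    adv = F.binary_cross_entropy_with_logits(
        real_fake_logit, torch.ones_like(real_fake_logit))
    # 2. Style-ambiguity term: cross-entropy between the predicted style
    #    distribution and the uniform distribution. Minimising it pushes
    #    generated images away from every known style.
    log_probs = F.log_softmax(style_logits, dim=1)
    ambiguity = -log_probs.mean(dim=1).mean()
    return adv + ambiguity

# Illustrative usage with dummy discriminator outputs.
fake_scores = torch.randn(8, 1)    # batch of 8 generated images
style_logits = torch.randn(8, 10)  # 10 style classes, arbitrary
print(can_generator_loss(fake_scores, style_logits))
```

The design point worth noticing is that "creativity" here is just a loss term: deviate from known styles while still being classified as art. The critic's taste is entirely inherited from the human-made training corpus.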

Now, aesthetics has its own value, in philosophical and real terms. The reality, however, is that the ability to critique aesthetics never implies the capacity to create something better. From a propositional standpoint, using CANs to generate images and develop aesthetically advanced or sophisticated “artwork” might seem lucrative. However, the two skills involved, aestheticisation and the critique of human-made aesthetics, can carry a human touch only when humans perform them knowingly and voluntarily. In the case of CANs, when algorithms anthropomorphise, there is no human touch or involvement. Human-made works, with their own mores, imagination, rules, biases and realities, are left to algorithmic scrutiny, which again is not human-centric.



Maybe, in certain aspects of artistic evaluation or creativity, those trained in CANs and GANs might use these algorithms for a project, or for commercial and other uses. However, granting any specific rights to an AI system does not make sense. The right to critique, even under a basic understanding of international human rights law, especially Art. 19 of the International Covenant on Civil and Political Rights (read with the UDHR of 1948), is a freedom of expression right. Freedom of expression, even under a libertarian understanding of rights, needs to have a human touch, because that system of understanding exists to protect human freedom of speech and expression against the excesses of a state. If an AI system generates images using CANs, why should it even be granted any IP rights?


The Council of Europe, a premier multilateral body engaged in global governance on issues of international human rights, recognised the human rights-centric aspect of artificial intelligence ethics under its former Ad Hoc Committee on Artificial Intelligence. Vesting rights in human beings is sacred and fundamental, because human creativity is truly human. Algorithmic creativity is anthropomorphic and does not amount to clear solutions and precedents. Here is a discussion I had some months ago with Gregor Strojin, the former Chair of the Ad Hoc Committee on Artificial Intelligence at the Council of Europe.


For reference, I recommend that readers watch this discussion with Maksim Karliuk, at 14:31, on Human-centered Artificial Intelligence.


Dr Richard Self has also provided strong arguments against recognising the IP rights of AI systems and technologies. The arguments are provided as follows, with some elaborations:

1) As far as we know, AI systems do not have the ability to reason. All current LLM systems are pure stochastic parrots and even lack the ability to understand their "knowledge".

This means that AI systems and technologies, beyond even the questions of responsibility and explainability, have in general not reached the ability to understand knowledge. Philosophically, it could be argued that such understanding has to be human-centric, or perhaps that it could be anthropomorphised. The former seems to be the legitimate criterion, and not the latter, considering the harms of algorithmic anthropomorphism. If the latter option is chosen for convenience, then its life cycles must be regulated and tested with due diligence. Knowledge management is delicate and, within the domains of law and management, must be addressed reasonably, especially when technologies like artificial intelligence are the subject matter. Annex 1 of the EU’s Artificial Intelligence Act is one of the most controversial and significant examples to look out for.

2) Any current form of "AI invention" is normally shown in systems that try many different solutions, such as genetic algorithms. This is not creativity, nor is it usually any form of reasoning.

This argument makes sense because invention and creativity are not the same. In the human case, there could be creative efforts behind generating or discovering something. Yet this does not make a case for AI technologies and systems.

3) All current forms of AI are very narrow and cannot transfer knowledge between different domains.

4) Non-human entities, such as a software system, also do not have any rights to real property and are owned by humans or institutions and businesses. As such, an AI system is just a tool that humans use to rapidly analyse different options. The resultant IP then rests with the natural human who posed the question and guided the tool towards a solution, or possibly, the IP is retained by the organisation for whom the person works.

On point 3: AI can also be categorised as Narrow, Weak and Strong AI. Measuring the “narrowness” of AI technologies can be done through many relevant methods, including quality and life cycle assessment, impact assessment, auditing, data quality, algorithmic explainability (the black box dilemma), due diligence, etc. No AI technology has yet been found that can transfer knowledge from an X domain to a Y domain. That is a human skill, with human orientation, which has been formalised over years and centuries. This point is surely sensible.


On point 4: Dr Self has pointed out the most complicated aspect of even granting IP rights to AI technologies: corporate governance & ethics. IP rights are used by legal entities, and when companies (be they MSMEs, large corporations or start-ups) or any other possible legal entity own the IP, they can develop strategic knowledge resources for their internal and/or knowledge uses. Whether it becomes legally relevant for a regulator to intervene is a subjective question, since different classes of AI technologies/systems have different multi- and cross-industrial requirements. Specific cases can perhaps be taken up to further unfold and study the phenomenon.


Conclusion

It is pertinent to note that the classes of AI technologies and their manifest availability make their legal/juristic status further complex and uncertain (unless properly tested and documented). If IP rights are ever to be granted to any such technologies, the primary requirement must be for a State to define the legal and juristic entitative status of the relevant class of AI technologies. The European Union’s approach to beginning that process is quite promising; how sectoral that approach turns out to be remains to be seen. Instead of asking hyped, pop culture-inspired questions about the legal status of AI, it is necessary to study the contours of regulating AI technologies (their development, production, usage, auditing and impact assessment) and how corporate governance & knowledge management affect the strategic role and inclusion of AI technologies.

Unless otherwise specified, the opinions expressed in articles published by Visual Legal Analytica, the digital publication, are those of the authors. They do not reflect the opinions or views of Indic Pacific Legal Research LLP or its members.


© Indic Pacific Legal Research LLP.

For articles published in VISUAL LEGAL ANALYTICA, you may refer to the editorial guidelines for more information.
