Artificial Intelligence (AI) and Law

This international, interdisciplinary conference will be hosted by the University of Cape Town's Faculty of Law in July 2024. We invite you to join us to consider all facets of the relationship between Artificial Intelligence (AI) and Law.

Dates: 3 - 5 July 2024

Where: Kramer Law Building, University of Cape Town (UCT)

The Kramer Law Building is situated on Cross Campus Road, on UCT's Middle Campus in Rondebosch, Cape Town, South Africa.

Call for Papers

The conference will be organised as a combination of plenaries, roundtables and parallel sessions on the various themes listed below.

* Work in progress will also be accepted for a dedicated works-in-progress stream.

Abstract Submission

Abstract submission opens: 1 June 2023

Abstract submission deadline: 30 September 2023

Decision date: 30 November 2023

Submitting is easy! Just use our handy submission form available here.

All abstracts must be submitted online. No other mode of submission will be accepted. Abstracts must be 400 - 500 words long. Please indicate in the submission form AND in the first line of your abstract the stream that you are submitting to. If you do not indicate the stream that you are submitting to, the conference organisers will allocate your paper to a stream at their discretion.

Review process:  The organising committee, assisted by subject matter experts in each theme, will conduct a single-blind review process to evaluate all submissions. Please contact us if you have any questions about this process.

Stream and Theme Descriptions 

The conference streams and theme descriptions are listed below in alphabetical order.

Benefits and challenges of AI to development processes in Africa

Convenor: Professor Ada Ordor, Centre for Comparative Law in Africa (CCLA)

Africa’s development co-operation and economic integration over the past six decades have seen the growth of major regional economic communities across the continent. These regional blocs have attained varying levels of integration, demonstrated in the activation of community institutions such as regional courts, parliaments and financial agencies. Many institutional processes, including court registry documentation, are being digitised for efficiency and ready access for users of these services.

Furthermore, the integration of digital trade provisions in the African Continental Free Trade Area (AfCFTA) initiative signals a resolve to leverage technological trends to expedite integrated economic development on the continent. A key concern, however, is the capacity of the spectrum of legal instruments and institutions governing intra-Africa commerce and other regional processes to support these digitised processes in the long term and deliver optimal benefits.

On the other side of the coin, the deployment of artificial intelligence systems in the digitisation of integration adds another layer of complexity to a development trajectory that seeks to achieve not only economic growth, but also political stability, social cohesion and cultural preservation. This stream explores the implications of AI for the progressive attainment of goals adopted in Africa’s regional instruments.

AI, Company Law and Corporate Governance: Exploring Ethical and Responsible Implementation of AI in Company Law and Corporate Governance

Convenor: Dr Mikovhe Maphiri 

This stream will discuss the impact of AI on company law, corporate governance and corporate social responsibility (CSR). We welcome submissions related to the following themes:

  • Ethical considerations in AI implementation: Topics could include issues such as data privacy, bias in AI algorithms, and accountability in decision-making.
  • Governance frameworks for AI: Topics could include regulatory considerations, industry standards, and best practices for ethical AI implementation.
  • AI and corporate social responsibility reporting: Topics could include the use of AI in ESG reporting, impact measurement, and stakeholder engagement.
  • AI and stakeholder engagement: Topics could include the use of AI in customer engagement, employee relations, and community outreach.
  • AI and board oversight: Topics could include board composition, training and education, risk management and the impact of AI on the potential legal liability of directors and officers.
  • AI and compliance: Topics could include the potential of AI to support compliance with, and implementation of, the provisions of companies legislation.

Emerging financial technology and the law

Convenor: Mr Ben Cronin

Rapidly emerging financial technologies are being deployed in the public and private sectors of the banking, insurance, financial services and taxation industries. These include (explainable) artificial intelligence, machine learning, private and (possibly) state-engineered blockchain-based technologies (including crypto assets), e-payment forms, and related financial technologies. Deployment of these technologies for surveillance by regulators and public sector agencies is becoming ubiquitous, with significant potential to enhance law enforcement and anti-corruption initiatives. Social goods such as financial inclusion and voluntary compliance may also improve. This multi- and transdisciplinary forum calls for papers that consider any of these developments and assess their implications for law in South Africa and/or any other jurisdiction.

Intellectual Property and AI

Convenor: Professor Caroline Ncube, DSI-NRF Research Chair: Intellectual Property, Innovation and Development

Papers in this stream will engage with the challenges and conundrums that AI poses for Intellectual Property (IP) law. There is a growing body of case law from several jurisdictions on certain aspects, whilst some cases are still in progress and other aspects are yet to be the subject of any litigation. Participants and presenters will examine and explore these issues under the following themes:

  • authorship and inventorship
  • ownership
  • infringement
  • AI and IP enforcement
  • AI and IP administration

The use of AI in International Dispute Resolution

Convenor: Dr Faadhil Adams

Artificial intelligence is already being used in many sectors of the international commercial legal industry. It has been used: in court systems to predict the outcomes of cases, and could likely be deployed similarly in arbitration; by major law firms around the world to determine the suitability of a party-appointed arbitrator on the basis of the awards that they have rendered, their political and legal leanings and any potential biases; as a technique to facilitate discovery; and to analyse arbitral awards to determine whether the presiding arbitrator did in fact draft the award. These instances show the potential and varied impact of artificial intelligence on international dispute resolution. This stream proposes consideration of the current and future benefits and pitfalls that artificial intelligence could hold for international commercial arbitration.

AI and the digitalisation of the workplace: The many labour law challenges

Convenor: Professor Rochelle le Roux

AI poses many challenges to the workplace. Not only may it cause job losses and shape the employability of workers, but it may also undermine the essence of employment, i.e. the employment relationship, change the demarcation of traditional employment sectors, and introduce new forms of discriminatory practices into the workplace. This stream proposes to investigate and understand the challenges presented by AI and the digitalisation of the workplace, to evaluate the readiness of labour law, and to suggest appropriate responses by the legislature.

Harnessing the Potential of Legal Technology: Examining the Impact and Future Developments of AI in African Law and Beyond

Convenor: Intaka Centre for Law and Technology

This stream will explore the utility and impact of legal technology (“legal tech”) in the legal domain under four main themes:

  • Impact of Legal Technology on Law in Africa

This theme will explore the ways in which legal technology has influenced and transformed the practice of law across Africa. Papers presented under this theme will discuss the adoption of technology-driven legal solutions on the continent and the benefits and challenges associated with their implementation.

  • Legal Informatics and Digitisation of the Law

The second theme will delve into the importance of legal informatics in the digitisation of the law, exploring the benefits and potential pitfalls of digital transformation in the legal sector.

  • How Technology Supports and Enhances the Legal Sector

Theme 3 will explore the myriad ways in which technology can support and improve the legal sector. Papers will discuss the potential applications of AI in various legal contexts and how these applications can contribute to a more efficient, fair, and accessible legal system.

  • AI and its Impact on the Judiciary in Africa

The fourth theme will investigate the impact of AI on the judiciary in Africa, examining how the adoption of AI-driven solutions affects the judicial process, decision-making, and the overall efficiency of the justice system.

By examining these four themes, the stream aims to provide a comprehensive and forward-looking analysis of how technology can support and enhance the law in Africa and beyond. Through engaging presentations and robust discussions, the stream will offer insights into the future developments of legal technology, promoting a better understanding of its potential and encouraging further research and collaboration in this exciting field.

AI, Law and Rhetoric

Convenor: Centre for Rhetoric Studies

The remarkable, recent developments in Artificial Intelligence (AI) are expressed through Large Language Models. Humans have used their linguistic abilities to train AI systems to increasingly do the same, to achieve interoperability through language. Whereas AI used to be separated into different fields (speech recognition, robotics, computer vision, etc.), the recent boom has seen an amalgamation, an inverse Tower of Babel, whereby interoperability allows for learning beyond what has been understood as possible.

At this point, where not even the world’s experts know where AI is headed or whether it is possible to distinguish between the work of an AI and a human, we must use the tools that we have to pave a sustainable path forward. At this nexus lie communication and persuasion. With recent developments indicating that some AI models can predict, reason and strategise, there is a pressing need to ask the questions that will guide an increasingly uncertain future. With a new rhetoric affecting and shaping a new legal system, we must use the existing systems to help understand and shape these developments.

This work stream will, through a series of papers, bring together in one panel discussion some views of AI developments from rhetorical studies.

AI, Gender and the Law

Convenor: Assoc Prof Kelley Moult

Gender, AI, and the law intersect in a wide variety of important ways, and may present both opportunities and red flags. For example, while AI can improve systems to serve the needs of women, gender biases are also reproduced in systems and can perpetuate or amplify gender stereotypes and biases, leading to discriminatory outcomes. AI technologies offer important opportunities for data collection and analysis, but can also raise concerns about privacy and consent. AI tools can be useful in detecting and preventing problems like domestic violence and online harassment, but may well raise questions about potentially impinging on the rights and privacy of victims and survivors.

This stream will interrogate the intricate relationship between gender, AI, and the law. Papers are invited that discuss challenges and opportunities that arise at the intersection of these domains, and the potential that this rapidly evolving landscape shows for harnessing AI for a more equitable and just society.

AI, International Law and the Use of Force

Convenor: Associate Professor Cathy Powell

International Law regulates the use of force in two main respects: it places restrictions on whether states may resort to force at all (the jus ad bellum), and it regulates how parties to a military conflict may engage in such a conflict, a body of law referred to as jus in bello, or International Humanitarian Law (IHL). Artificial Intelligence (AI) has the potential to play a significant and potentially disruptive role in both of these areas.

With respect to the jus ad bellum, AI can be used to detect and predict threats to a state and even respond to these threats automatically, such as in the case of cyberattacks against a state’s electronic infrastructure. It can also play a role in determining whether force should be used in response to such threats, and how. It therefore becomes essential that the machine learning informing – or making – the decisions in this regard is trustworthy, transparent and safe from manipulation. Yet machine learning is notoriously opaque and AI may itself be an instrument of manipulation, through the creation of deep fake videos and other material which may deceive their targets into self-destructive or unlawful conduct.

In the arena of IHL, AI has long been used in weapons systems. Algorithms are employed not only to convey information to the humans in control of unmanned weapons, but also to choose and attack targets autonomously.

The legal issues that arise from the use of AI in both areas move well beyond the responsibility of states for the harm caused by AI. International Law also needs to develop a framework to determine when and how AI should be used in these areas at all, such that its use is regulated and its misuse prevented.

Crime, Risk and AI: the new frontier

Convenors: Professor Clifford Shearing & Dr Annette Hubschle

As artificial intelligence capabilities are rapidly integrated into policing, security and surveillance technologies, the world of crime prevention, detection and risk management is changing, bringing emerging challenges and opportunities for humanity. The Crime, Risk and AI stream seeks to consider how the AI revolution is reshaping harm and risk landscapes and how best to regulate these emergent technologies, in light of the profound ethical dilemmas associated with them. While AI offers much promise in enhancing the governance of security in multiple domains, it also introduces new risks that call for rigorous analysis and debate. Examples of topics in this stream include:

  • AI’s role in policing: an examination of how machine learning and predictive analytics are shaping law enforcement and crime prevention.
  • Ethics, profiling and bias: the moral complexities and potential biases underpinning AI-driven decision-making in security governance.
  • Cybersecurity and crime: juxtaposing AI’s instrumental role in detecting and countering cybercrime with the emerging risks it poses.
  • Risk management: how AI technologies are recalibrating risk assessment, compliance and strategic decision-making. 
  • Conservation law enforcement: analysis of how AI technologies, including machine learning and drone surveillance, are shaping biodiversity protection, real-time monitoring, and predictive measures to fight poaching and habitat destruction.

AI, Education and Law

Convenor: Sukaina Walji

The rapid uptake of Generative AI in 2023 is impacting teaching and learning in educational institutions globally. The multipurpose nature of these tools lends them to a multitude of use cases in education, and teachers and students are exploring and responding to the availability and use of these technologies. While AI is having an immediate impact on current teaching and learning practices and pedagogies, especially with regard to academic integrity, the real-world uptake of these tools brings to the fore the need to consider how they will shape and inform future curricula and how and what is taught in courses and programmes in an AI-enabled world. In particular, the availability of generative AI is shaping the teaching and learning of foundational skills or literacies such as critical thinking, writing, long-form reading and argument. Traditional pedagogies for teaching and assessing these foundational literacies are challenged by the availability of generative AI, with responses including a focus on assessing and demonstrating process and reflection rather than outputs.

At the same time, there are ethical issues such as lack of transparency, copyright implications and inherent social biases in the training data of many of these AI tools that may be reflected in outputs, which raise questions around how to ethically encourage uptake of these tools in African and Global South contexts. In an increasingly AI-enabled future, teachers and students need to develop foundational AI literacy capabilities to make informed choices. This stream encourages submissions that consider how generative AI is impacting education and pedagogies, the teaching of Law, and the development of future Law curricula. Abstracts may respond in general to these issues or focus on one of the particular areas below:

  • Experiences of how pedagogies, teaching or assessment practices have responded to or been shaped by the uptake of generative AI in educational contexts
  • What are the graduate capabilities for working in a world of AI in relation to Law education?
  • What does “AI Literacy” mean in the context of the teaching of Law and how might it be incorporated into the curriculum?
  • How can AI be used in ethical ways to support the teaching and learning of key foundational literacies?

AI and the Philosophy of Law / Legal Theory

Convenors: Prof Jaco Barnard-Naudé, UCT Law; Dr Scott Timcke, Research ICT Africa; and Dr Andrew Rens, Research ICT Africa

The interaction of law and AI is routinely framed as the need for law to adapt to an innovative technology. It is assumed that innovation is a good in itself that unfolds according to an inherent logic, and that what is required of law is to mitigate harms that might result without challenging the nature of the innovation. The collision of AI and law presents an opportunity to re-examine these inarticulate premises and their associated dynamics.

There are also questions for legal theory. Multiple jurisdictions have regulated, or are in the process of regulating, AI. The contrast between the approaches adopted is revealing of different philosophies towards innovation and the legal treatment of technology. In China, the emphasis is on prohibiting AI from subverting the political order, while in Europe regulation is aimed, if not always successfully, at the avoidance of harm. By contrast, the United States has largely refused to regulate AI, in service of permissionless innovation.

What those approaches have in common is a consequentialist logic, not unlike the consequentialist logic driving AI. But there are alternatives to consequentialism in the philosophy of law that emphasise rights, values and principles. How might African experiences and humanistic thought inform legal theory?

Proponents of AI claim that Artificial General Intelligence (AGI) will be achieved in a few short years. This is accompanied by an assumption that AGI would be treated as a person in law. There are already claims that AI systems should be treated as inventors, authors and protected speakers, each an implicit claim for legal personhood. But are these claims justifiable?

We invite papers that respond to, reframe, and overhaul these and other questions in AI and the philosophy of law. Submissions can also address questions like:

  • What are the implications of a presumptive pro-innovation stance for law?
  • Must law create conditions for AI to be a successful innovation?
  • Should AI regulation follow a consequentialist logic? What alternatives are available and how would they change regulation of AI?
  • Would a legal decision by an automated system be just?
  • What jurisprudential questions are raised by the possibility of legal personality for AI?
  • What do different philosophies of law and political economy suggest for the regulation of AI?
  • What demands do human rights place on the regulation of AI?
  • How might AI ethics inform legal responses to AI?
