ChatGPT will disrupt how we search the Internet for information – and even whether, and how, businesses employ human resources for certain tasks. While the promise of greater productivity looms, OpenAI, based in San Francisco close to Silicon Valley, released ChatGPT before the legal and ethical issues surrounding the AI had been resolved. Nothing has changed, even after the huge problems caused by Silicon Valley based systems and business models over the preceding decades. Nobody seems to have learned their lesson.
A plethora of new legal issues
The law firm Advant Beiten has now published a blog article and a preview of a publication for the Haufe Wirtschaftsrechtsnewsletter, which discusses legal issues with ChatGPT. The reader's conclusion: the legal systems of our democracies are simply not ready for the power of next-gen AI. And: standards for value-based engineering (VBE), such as IEEE 7000, need to proliferate faster if damage to democratic societies and economies is to be prevented. Since ChatGPT is trained on large amounts of copyrighted material, questions arise about intellectual property rights and ownership of the text ChatGPT generates; authors and publishers might not be adequately protected. ChatGPT might also be misused to generate fiction that reads like factual information, misinforming readers who – in democratic societies – need a common understanding of reality in order to make informed voting decisions.
Limitation of liability is not a solution
OpenAI, which releases ChatGPT under a far-reaching limitation-of-liability clause, shifts the responsibility for lawful use onto ChatGPT's users. Contracts and general terms and conditions, however, by no means prevent the misuse of ChatGPT; they are only as good, ethical, and binding as the parties to such agreements. We therefore expect – in the near future – to see not only the benefits of ChatGPT but also numerous lawsuits surrounding the AI. Liability insurance funds for damage caused by AI, as envisioned by the EU, will not be suited to compensate for the intangible harm ChatGPT can cause. Responsible AI development, then, looks different from releasing a powerful AI tool without consulting stakeholders beforehand and disclaiming all responsibility for its impact.
Further reading: Advant Beiten's blog article (in German)