Deepfakes and Other Legal/Ethical Concerns in the Era of AI


Generative AI systems raise several legal and ethical issues.

One is “deepfakes”: AI-generated images and videos that appear realistic but are not. Until recently, creating deepfakes required considerable computing skill. Now, almost anyone can create them.

OpenAI has attempted to control fake images by “watermarking” each DALL-E 2 image with a distinctive symbol. More controls are likely to be required in the future — particularly as generative video creation becomes mainstream.
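Watermarking approaches vary: DALL-E 2's mark is a visible symbol added to the image, while other schemes hide an identifier invisibly in the pixel data itself. As a rough illustration of the invisible variety only, here is a toy least-significant-bit scheme in Python. This is a sketch of the general steganographic technique, not OpenAI's implementation; the function names and the flat-pixel-list representation are hypothetical simplifications.

```python
def embed_watermark(pixels, mark):
    """Hide the bytes of `mark` in the lowest bits of a flat list of
    0-255 pixel values, changing each affected pixel by at most 1."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the watermark")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the least significant bit
    return out

def extract_watermark(pixels, length):
    """Recover `length` bytes previously embedded by embed_watermark."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)
```

Because only the lowest bit of each pixel changes, the mark is imperceptible to viewers but recoverable by anyone who knows the scheme; real provenance systems add robustness against cropping, compression, and re-encoding, which this toy version does not have.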

Generative AI also raises numerous questions about what constitutes original and proprietary content.

Because the generated text and images do not exactly match any prior content, the providers of these systems argue that the output belongs to the users who prompted it. But the output is clearly derivative of the text and images used to train the models. These technologies will provide substantial work for intellectual property attorneys in the coming years.

It is clear that we are now only scratching the surface of what generative AI can do for organizations and the people within them.

It may soon be standard practice, for example, for such systems to craft most or all of our written or image-based content — to provide first drafts of emails, letters, articles, computer programs, reports, blog posts, presentations, videos, and so forth.

No doubt the development of such capabilities would have dramatic and unforeseen implications for content ownership and intellectual property protection, but it is also likely to revolutionize knowledge and creative work.

If these AI models continue to progress as they have in the short time they have existed, we can hardly imagine all of the opportunities and implications that they may engender.

Source: Davenport, T. H., & Mittal, N. (2022, November 14). How Generative AI Is Changing Creative Work. Harvard Business Review.