A splash of cold water – considering AI, terms of service, training data, and copyright

Update (1/17/2023): Since publishing this article, a class action lawsuit has been announced challenging Stability AI (makers of Stable Diffusion), Midjourney, and DeviantArt. Read more here.

Author’s note: This article began as a general review of terms of service statements used by a variety of popular AI generators – this topic piqued my interest when I began observing professional architectural designers employing popular services in their commercial work. AI-based technology has grown in popularity in the last year primarily due to the availability of easy and accessible services with pre-trained algorithms driven by a massive amount of capital investment (at the time of this writing, Microsoft is reportedly closing on a deal to invest $10 billion into OpenAI – creators of ChatGPT and DALL-E). Hidden under the hype for these tools are topics that should give professionals pause regarding their data, copyright, and the ethical nature of underlying training data. This article explores some of the more pragmatic considerations, but expect more commentary from us in the future on the impact of these technologies and opportunities for responsible implementation.

Rather than simply ‘cheering on’ the inevitability of technology, I believe that it is incumbent upon experts and specialists to responsibly evaluate emerging technologies from all angles: Where might the technology provide benefit? Where might the technology cause problems or even unanticipated harm? In weighing these attributes, only then is it possible to set out strategies that can offer productive implementations of new tools. This is especially true with some of the most popular trends hitting our social media feeds today.


It is – after all – the central lesson of Jurassic Park: the danger of being preoccupied with whether or not you could and not stopping to think if you should. And we all know how that popular story ended up for John Hammond…

Arguably today’s hottest tech trend involves AI – more specifically, the ‘AI generator’ tools that produce captivating images, well-composed writing, and (semi)usable software code. The output of these tools is created from simple natural language prompts. Do you want an image of a building designed in the style of Zaha Hadid, rendered on a winter evening, and illustrated in the style of Piet Mondrian? Simply ask your favorite AI generator – whether it be Midjourney, DALL-E, Stable Diffusion (from Stability AI), or one of many other emerging platforms. (The image for this article was generated with a prompt on Midjourney.)
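
To make the workflow concrete, here is a minimal sketch of prompt-driven generation, assuming OpenAI’s image API and the openai Python package as they existed in early 2023 (Midjourney itself is operated through Discord rather than a public API). The prompt text and key placeholder are purely illustrative.

```python
# Minimal sketch: generating an image from a natural-language prompt using
# OpenAI's DALL-E endpoint (openai Python package, v0.x era). Illustrative only.
import openai

openai.api_key = "sk-..."  # placeholder; substitute your own API key

response = openai.Image.create(
    prompt=(
        "a building in the style of Zaha Hadid, rendered on a winter "
        "evening, illustrated in the style of Piet Mondrian"
    ),
    n=1,               # number of images to generate
    size="1024x1024",  # output resolution
)

print(response["data"][0]["url"])  # temporary URL of the generated image
```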

The concept is quite seductive – images and writings that would have taken a person hours or days to produce using conventional means can now be generated in minutes or seconds. Of course, early-adopting architects have gravitated towards the use of image generators to create renderings of buildings – many of which look like they were produced using sophisticated 3D modeling software, the latest rendering engines, and masterful Photoshop abilities. Instead, the result is the output of an AI algorithm that – at its core – is trained on extensive libraries with billions of existing images.


With a platform like Midjourney, an architectural designer can provide a general description of what they want to see and – within seconds – be presented with compelling illustrations that would have otherwise taken a great deal of time and skill to produce. A cost-conscious business owner or client may look at this capability and question: Why exhaust expensive creative hours when a similar result can be nearly instantaneous by comparison?

It’s seemingly magic and – honestly – it’s a fun concept. However, while it is easy to become seduced by new capability, it is also just as easy to overlook risks and unanticipated consequences of the same capability.

There are many dimensions to consider about the opportunities and risks of implementing AI generators in today’s creative work. The artistic merit and the human relationship with AI will continue to be debated in the near- and long-term. However, in my view there are some pragmatic aspects to the use of these tools that should be considered when determining whether to use today’s popular services. Many of these considerations and risks are made readily apparent within many AI generators’ terms of service.

The goal of this article is to improve awareness among creative professionals and businesses that are looking to employ popular versions of these capabilities in their work today by reviewing the terms of service, sources of training data, and copyright concepts.

Terms of Service

The terms of service (TOS) of software might otherwise be known as the daunting legalese you quickly ‘accept’ so you can move on to using the software. However, as concerns about data ownership, security, and privacy continue to grow, I have found it increasingly important to review the TOS for the apps I make use of – especially ones on the cutting edge.


While there has already been much writing covering the uses and outputs of AI generators, the TOS that establishes the legal framework for each service has – in my view – received far less scrutiny in popular media. Recently, Proving Ground performed a basic survey of the Terms of Service documents associated with popular AI generator tools. I believe the findings should give many users pause – especially when considering professional uses.

Let’s take the terms of service of Midjourney, one of the most popular AI generators with 5.7 million Discord members. In the context of their free service, the assets produced by the AI are licensed to the user: “Midjourney grants you a license to the Assets under the Creative Commons Noncommercial 4.0 Attribution International License”. However, while the paid tier provides more flexibility for commercial use, the user grants “to Midjourney, its successors, and assigns a perpetual, worldwide, non-exclusive, sublicensable no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute text, and image prompts you input into the Services.” In other words, your prompt – including any input images you submit – and its generated output may be used or recycled by Midjourney without recourse.

Similar terms and conditions are present in every service we surveyed. With Stability AI, “you hereby grant Stability a nonexclusive, worldwide, royalty free, fully paid up, transferable, sublicensable, perpetual, irrevocable license to copy, display, upload, perform, distribute, store, modify, and otherwise use such materials”. By using OpenAI’s services – including ChatGPT or DALL-E – “you agree and instruct that we may use Content to develop and improve the Services” (they provide an option to ‘opt out’ via email).

These types of terms are not exclusive to upstart companies like OpenAI. Recently, Adobe came under criticism after introducing an ‘opt out’ policy for its desktop apps stating that “Adobe may analyze your content using techniques such as machine learning (e.g., for pattern recognition) to develop and improve our products and services.”


If you are a design firm working on confidential projects for a client and you use some of these technologies, granting a broad license to a project asset – such as images – is likely a non-starter. Would a privacy-conscious developer or confidential government project respond well to a striking project concept appearing in a TechCrunch article about the impact of AI? Likely not – but this is completely possible under these terms of service.

Training Data Sources

Another area that should give professionals and businesses pause when evaluating AI generator services is the source of the underlying ‘training data’. In the world of AI and machine learning, training data is what makes an AI platform effective: the more quality data an AI can learn from, the better the output. For example, Stability AI uses “the 2b English language label subset of LAION 5b https://laion.ai/blog/laion-5b/, a general crawl of the internet created by the German charity LAION.” (LAION 5b is a library of 5.85 billion labeled images.)
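
To get a sense of what this training data actually looks like, the sketch below streams a few records of LAION metadata. It assumes the "laion/laion2B-en" dataset published on the Hugging Face Hub and the datasets library are available; each record is essentially a caption paired with an image URL scraped from the open web.

```python
# Rough sketch: inspecting LAION training metadata with the Hugging Face
# `datasets` library. Assumes the "laion/laion2B-en" dataset is available
# on the Hub; streaming avoids downloading billions of rows of metadata.
from datasets import load_dataset

laion = load_dataset("laion/laion2B-en", split="train", streaming=True)

for i, record in enumerate(laion):
    print(record)  # caption, image URL, and related metadata
    if i >= 4:
        break
```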

However, there is growing concern among artists and creatives about potential copyright infringement by using the image libraries in this manner. The methods used for collecting images for training (e.g., sourced from web crawling) do not presently provide mechanisms to obtain permission or consent from the original authors. Nor is it presently possible for a user of these tools to provide credits to the original works used in a resulting AI output. This leaves open the possibility that an AI generated image may infringe on an original work’s copyright – likely without the user being aware.


In fact, this area of copyright and IP law is likely – as this Silicon Republic article describes – a ‘legal minefield’ for users. It is quite obvious that the companies creating AI generators are well aware of the possible consequences here:

Stability AI’s FAQ seemingly punts on the question of copyright in its images with a general statement: “The area of AI-generated images and copyright is complex and will vary from jurisdiction to jurisdiction.” Furthermore, Stability AI’s TOS states that “you own the Content that you generate using the Services to the extent permitted by applicable law.” In layman’s terms: the content is yours so long as the content is legal. Similarly, OpenAI directs responsibility for this matter onto the user – in their terms of service, they stipulate that “You are responsible for Content, including for ensuring that it does not violate any applicable law or these Terms.”

In this context, it is especially important to note that in our review, neither OpenAI nor Stability AI provides any warranty as to whether the generated content is legal.

Meanwhile, Midjourney’s terms are far more blunt: “You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved.” This term is followed by “If you knowingly infringe someone else’s intellectual property, and that costs us money, we’re going to come find you and collect that money from you.” (Yes, this is a direct quote from their terms of service as of 1/10/2023.)

When it comes to the potential legal minefield with AI generators, users of the services seem to be on their own!

Copyright

The question of copyright is relevant to both the training data and any generated output. As noted above, the use of training data obtained without the consent of the original authors raises questions about copyright infringement within the AI backend itself. By extension, the question of copyright in the output image is also a point of debate and concern.


Ultimately, no legal precedent has yet been set in this area because the applications are so new. It remains an open question whether, legally speaking, an AI-generated image or writing even meets the threshold of originality for copyrighted works.

Presently, I find it debatable that an AI-generated output could meet the threshold – especially in the context of the implementations described here. Assuming it does not meet the originality threshold, creative professionals using third-party AI generators to produce content could find themselves in an uncomfortable place where their generated content could be taken, remixed, and reused without their permission. Midjourney explicitly recognizes this possibility in its terms of service as a facet of its community: “Midjourney is an open community which allows others to use and remix your images and prompts whenever they are posted in a public setting. By default, your images are publically viewable and remixable. As described above, you grant Midjourney a license to allow this.”

Ironically, this would bring things full circle: the training data itself is built from original authors’ works that were used without their consent.

Towards Responsible Implementation

This article leans heavily into skepticism and critical analysis of the current implementations of AI generators and – in reality – touches on only a small subset of the concerns and debates this technology is already prompting (pun intended). For example, this article does not touch on harmful bias that is often present in trained AI – nor does it discuss the topic of artistic expression in relation to AI outputs. Because the technology is so new, I would expect that we will undoubtedly see rapid evolution of philosophical dialog adjacent to the growth of legal precedent that will be set in response to concerns about copyright and intellectual property.


Thinking forward, there are a number of items that I believe creative professionals must advocate for in order to support healthy and ethical adoption of this technology:

  • Users should call upon AI platform developers to be transparent with how the underlying source training data is collected and implemented into an AI generator. Users have the right to fully understand how the outputs are being generated – especially if they are ultimately carrying the liability for the output per the terms of service.
  • Users should call upon AI platform developers to take action in gaining permission and consent from original authors whose content was used in the training model. Furthermore, users should have the ability to trace back the output to the sources used to create it.
  • Third-party commercial tools built to use AI generators should strive for transparency about what backend generators are being implemented, how the training model was composed, and the TOS tied to the generator.
  • Users should strive to curate their own training data sets that meet their needs and standards. While it is a much steeper hill to climb for an AI implementation, I would posit that many of the critical pragmatic observations in this article are greatly alleviated in cases where the designer has developed their own novel training models with data in their control. For example, professionals may consider hosting a private instance of Stable Diffusion (provided open source under the MIT license) and working towards curating their own trained model (see the sketch below).
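
As a rough illustration of that last point, the sketch below runs Stable Diffusion on local hardware using the Hugging Face diffusers library. The model id, precision, and prompt are illustrative assumptions, and fine-tuning on a curated, firm-owned dataset would be an additional step beyond what is shown here.

```python
# Minimal sketch: hosting Stable Diffusion privately with the Hugging Face
# diffusers library. Model id and settings are illustrative assumptions;
# fine-tuning on a curated, firm-owned dataset is a separate step.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # publicly released weights
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # requires a local GPU

image = pipe("massing study for a riverfront pavilion at dusk").images[0]
image.save("concept.png")
```

Running the model in-house keeps prompts and outputs on infrastructure the firm controls, which avoids the broad license grants discussed earlier in this article.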


Ultimately, it is clear that the world of AI applications is evolving quite quickly with many enthusiasts working to build hype for the technology. However, it’s not enough to take these powerful technologies at face value – there are numerous legal, ethical, and creative debates to be had in determining AI’s full impact. I feel we need a splash of cold water to help us fully understand these opportunities and better predict unforeseen outcomes.

(Note: No, this article was not written using ChatGPT – beware human minds at work…)

Important References 

Note: the quotations used in this article regarding specific terms of service were sourced as of January 10, 2023.