Training Your Own Algorithms – AI and the Creative Process

By this past January, the topic of AI had grown to dominate mainstream discourse about technology’s impact on business and industry. Today, you’d be hard pressed to turn on the evening news, read your favorite publication, or scroll through a LinkedIn feed without stumbling on AI-related commentary. Of course, it is no coincidence that the hype reached fever pitch just as OpenAI was closing a Microsoft investment estimated to be on the order of $10 billion – as of this past April, OpenAI is valued at over $27 billion.

Earlier this year, I penned a blog article offering a modest critical evaluation of the terms of service of the popular platforms driving much of the AI-related conversation – including Midjourney, StabilityAI, and ChatGPT. I highlighted the problematic aspects of adopting these tools with respect to intellectual property rights, copyright, and other ethical concerns for creative professionals. Months later, the points I raised about terms of service and copyright have become even more pronounced, as the US Copyright Office has since issued guidance indicating that AI-generated content produced solely from prompts cannot be copyrighted: “When an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the ‘traditional elements of authorship’ are determined and executed by the technology — not the human user.”

The context here is important: the term ‘AI’ in today’s mainstream discourse (and seemingly in the USCO guidance) has generally converged around popular platforms that are pre-trained on massive quantities of data outside of the user’s influence or control. As such, the user is relegated to the role of ‘prompter of’, not ‘creator of’, the work (excluding any more substantial creative post-production after the fact).

However, I believe there is a wider opportunity for the creative process in the underlying curation of data and training of AI-powered tools. Beyond prompting black-box AI platforms, designers should consider orchestrating their own implementations by sourcing relevant data and using algorithms best suited to their design problem. The chief outcome is a level of creative control and intentionality over the AI-powered pipeline which – in my view – leaves little question about the influence of human originality in the orchestration of a creative process and its outputs.

The opportunity to train algorithms for use in creative processes is an area Proving Ground has been exploring for several years, going back to the first release of LunchBoxML as an open-source plugin in 2018. Here, machine learning – a subset of AI – is provided as a set of Grasshopper components that can be calibrated and deployed within the computational design graph, so designers can train their own predictive models to aid problem solving and exploration. Since the plugin’s release, we’ve seen uses of this toolkit ranging from performance prediction to space navigation and layout selection. (In 2020, the Design Transactions journal published a paper authored by David Stasiuk and me which discusses some of our own uses.)
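
To make the idea concrete, below is a minimal sketch in C# of the kind of train-then-predict loop such components wrap, written against Microsoft’s ML.NET (the same framework the newer LunchBoxML release builds on, as described further down). The scenario and field names – rooms described by width, depth, and window ratio, predicting a daylight factor – are invented stand-ins for whatever data a designer has curated; this is an illustration, not LunchBoxML’s actual implementation.

```csharp
using System;
using Microsoft.ML;
using Microsoft.ML.Data;

// Hypothetical training record: predict a daylight factor from simple room
// geometry. Fields and values are placeholders for a designer-curated dataset
// (e.g. simulation results).
public class RoomSample
{
    public float Width;
    public float Depth;
    public float WindowRatio;
    public float DaylightFactor; // the label we want to predict
}

public class RoomPrediction
{
    [ColumnName("Score")]
    public float DaylightFactor;
}

class TrainRegression
{
    static void Main()
    {
        var mlContext = new MLContext(seed: 0);

        // A toy dataset; in practice this would be hundreds or thousands of rows.
        var samples = new[]
        {
            new RoomSample { Width = 4f, Depth = 5f, WindowRatio = 0.3f, DaylightFactor = 2.1f },
            new RoomSample { Width = 6f, Depth = 8f, WindowRatio = 0.5f, DaylightFactor = 2.8f },
            new RoomSample { Width = 3f, Depth = 4f, WindowRatio = 0.2f, DaylightFactor = 1.6f },
            new RoomSample { Width = 7f, Depth = 6f, WindowRatio = 0.6f, DaylightFactor = 3.4f },
        };
        var data = mlContext.Data.LoadFromEnumerable(samples);

        // Assemble the features and train a basic linear regression (SDCA).
        var pipeline = mlContext.Transforms.Concatenate("Features",
                nameof(RoomSample.Width), nameof(RoomSample.Depth), nameof(RoomSample.WindowRatio))
            .Append(mlContext.Regression.Trainers.Sdca(
                labelColumnName: nameof(RoomSample.DaylightFactor)));

        var model = pipeline.Fit(data);

        // Query the trained model for an unseen design option.
        var engine = mlContext.Model.CreatePredictionEngine<RoomSample, RoomPrediction>(model);
        var result = engine.Predict(new RoomSample { Width = 5f, Depth = 5f, WindowRatio = 0.4f });
        Console.WriteLine($"Predicted daylight factor: {result.DaylightFactor:F2}");
    }
}
```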

More recently, we published a new version of LunchBoxML that expands the workflow and implements Microsoft’s ML.NET framework for a range of machine learning workflows and algorithms. Grasshopper users can deploy a variety of training models using data they’ve procured and algorithms of their choosing. Another central feature of this new release is the ability to save, reuse, and expand on trained models over time.
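
Saving and reloading is a small piece of plumbing, but it is what makes the “reuse and expand over time” workflow possible. As a rough sketch continuing the regression example above (same mlContext, model, data, and record types; the file name is an arbitrary placeholder), ML.NET persists a trained pipeline together with its input schema:

```csharp
using Microsoft.ML;

// Continues the regression sketch above: mlContext, model, data, RoomSample,
// and RoomPrediction are the objects defined there.

// Persist the trained pipeline together with its input schema.
mlContext.Model.Save(model, data.Schema, "daylight-regression.zip");

// In a later session, reload and predict without retraining from scratch.
ITransformer reloaded = mlContext.Model.Load("daylight-regression.zip", out DataViewSchema schema);
var engine = mlContext.Model.CreatePredictionEngine<RoomSample, RoomPrediction>(reloaded);
```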

I had the privilege of sharing these new developments with a group of workshop participants at the Shape to Fabrication conference hosted in London this year. Together, we explored how to use Rhino’s geometry and data to fuel predictive scenarios – including regression, binary classification, and multiclass classification models. (Examples are now available on our LunchBox documentation resource.)
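
For the multiclass case, the sketch below classifies a hypothetical space-use label from a few geometric features of the kind that could be extracted from Rhino or Grasshopper geometry. Again, this is plain ML.NET rather than the workshop files or LunchBoxML components themselves, and the features and labels are invented for illustration.

```csharp
using System;
using Microsoft.ML;
using Microsoft.ML.Data;

// Hypothetical multiclass example: predict a space-use label from simple
// geometric features (all names and values are placeholders).
public class SpaceSample
{
    public float Area;
    public float AspectRatio;
    public float WindowCount;
    public string UseLabel; // e.g. "office", "meeting", "circulation"
}

public class SpacePrediction
{
    [ColumnName("PredictedLabel")]
    public string UseLabel;
}

class ClassifySpaces
{
    static void Main()
    {
        var mlContext = new MLContext(seed: 0);

        var samples = new[]
        {
            new SpaceSample { Area = 12f, AspectRatio = 1.2f, WindowCount = 1f, UseLabel = "office" },
            new SpaceSample { Area = 30f, AspectRatio = 1.5f, WindowCount = 3f, UseLabel = "meeting" },
            new SpaceSample { Area = 8f,  AspectRatio = 4.0f, WindowCount = 0f, UseLabel = "circulation" },
            new SpaceSample { Area = 14f, AspectRatio = 1.1f, WindowCount = 2f, UseLabel = "office" },
        };
        var data = mlContext.Data.LoadFromEnumerable(samples);

        // Map the string label to a key, learn a multiclass model, then map back.
        var pipeline = mlContext.Transforms.Conversion.MapValueToKey("Label", nameof(SpaceSample.UseLabel))
            .Append(mlContext.Transforms.Concatenate("Features",
                nameof(SpaceSample.Area), nameof(SpaceSample.AspectRatio), nameof(SpaceSample.WindowCount)))
            .Append(mlContext.MulticlassClassification.Trainers.SdcaMaximumEntropy())
            .Append(mlContext.Transforms.Conversion.MapKeyToValue("PredictedLabel"));

        var model = pipeline.Fit(data);
        var engine = mlContext.Model.CreatePredictionEngine<SpaceSample, SpacePrediction>(model);

        var guess = engine.Predict(new SpaceSample { Area = 11f, AspectRatio = 1.3f, WindowCount = 1f });
        Console.WriteLine($"Predicted use: {guess.UseLabel}");
    }
}
```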

While the studies here are basic, tutorial-level examples meant to introduce new users, I believe they suggest a possible trajectory for broader AI implementations, one that pays special attention to the training process:

  • Data sourcing: Designers are able to make determinations about appropriate data sources and the quantity and quality of data needed for their problems.
  • Algorithm selection: Designers are able to select and calibrate algorithms to support better predictive outcomes, such as avoiding ‘overfitting’ or ‘underfitting’ of solutions (see the evaluation sketch after this list).
  • Interrogating outputs: By having more direct control over data inputs, training processes, and algorithms, designers are able to more fully interrogate and evaluate the validity of their outputs and predictions.
  • Ethical considerations: By being in control of the data sourcing and training pipeline, designers are given tools to interrogate underlying ethical concerns with ‘data hungry’ applications (e.g. underlying bias in the data, or how the data was created or sourced).
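
One simple, concrete way to check for over- or under-fitting is to hold data out of training and compare metrics, or to cross-validate. The sketch below continues the earlier regression example (same mlContext, pipeline, and data, assumed here to hold a realistically sized dataset rather than the four toy rows above).

```csharp
using System;
using Microsoft.ML;

// Continues the regression sketch earlier (same mlContext, pipeline, and data).

// Hold out 20% of the data: a model that scores well on the training split but
// poorly here is likely overfitting the curated dataset.
var split = mlContext.Data.TrainTestSplit(data, testFraction: 0.2);
var trained = pipeline.Fit(split.TrainSet);

var testPredictions = trained.Transform(split.TestSet);
var metrics = mlContext.Regression.Evaluate(testPredictions,
    labelColumnName: nameof(RoomSample.DaylightFactor));
Console.WriteLine($"Held-out R^2: {metrics.RSquared:F3}, RMSE: {metrics.RootMeanSquaredError:F3}");

// Cross-validation repeats the same check across several folds.
var folds = mlContext.Regression.CrossValidate(data, pipeline,
    numberOfFolds: 5, labelColumnName: nameof(RoomSample.DaylightFactor));
foreach (var fold in folds)
    Console.WriteLine($"Fold R^2: {fold.Metrics.RSquared:F3}");
```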

Ultimately, I believe AI and its sub-domains – including Machine Learning – are important concepts that can be leveraged in creative endeavors beyond ‘black box’ services. To achieve this, we need more tools that provide designers with agency over a complete data-driven design pipeline and that uplift skills to bridge today’s uneven adoption of new capabilities.


What’s next?


Special thanks to LunchBoxML contributors…

  • Nazanin Alsadat Tabatabaei Anaraki
  • Andrew Payne
  • David Stasiuk

The tools wouldn’t be where they are today without them!