“We shape our tools and thereafter our tools shape us” (Culkin/McLuhan) is an often-cited quotation in technology circles. The observation describes the cyclical feedback loop between the tools we use and the work we perform with them. It also speaks to how these feedback loops can ‘shape us’ – our attitudes, our behaviors, and our values.
This cycle is observable in trends in architectural design and technology. For example, an architecture firm that holds sustainability as a core design value is also likely to be actively investing in robust tools and workflows for environmental analysis. In turn, the wider availability of more intuitive environmental analysis tools has arguably influenced the adoption of sustainability-oriented standards, processes, and goals.
If we want to further specific values, we find tools to help us pursue those ambitions. And as tools become more available and accessible, they can encourage us to take up new pursuits.
Today, Artificial Intelligence (AI) is being rapidly adopted by architects and construction industry professionals. Much of the discourse on this adoption curve focuses on pragmatic questions of business impact: What tasks and jobs will AI offset or replace? What new AI-enabled roles and skills will be required? How can we use the latest AI features for design? What specific efficiency, speed, quantity, and quality outcomes should we expect with AI?
However, I believe that AI’s impact extends far beyond the pragmatic and that it is positioned to shape us more than any other technology in recent memory (other than perhaps the internet itself). As a class of technology, AI presents numerous philosophical and ethical challenges for architecture professionals, ranging from confidentiality and intellectual property to environmental impact. It is incumbent upon architects to evaluate the profound ethical implications AI introduces relative to the values, sensibilities, and standards that the architecture profession has historically upheld.
In this article, I will use the American Institute of Architects’ Code of Ethics and Professional Conduct to contextualize a series of ethical concerns raised by today’s adoption of AI. While the article cites the AIA documents directly, architects working around the world will likely find similar alignment with their own professional codes (such as the RIBA Code of Practice). I will also offer tactics that architecture professionals can employ to balance the potential benefits of AI technology while mitigating problematic outcomes.
Standards of Competence and Responsible Control
As AI has been adopted across professional circles, stories have emerged of reckless and negligent use of AI systems. This past summer, a US lawyer was sanctioned after their court filing referenced nonexistent court cases generated by ChatGPT. As recently as September 2025, a California court issued a $10,000 fine in a case where 21 of 23 quotations in a court brief were fabricated by ChatGPT.
Even as AI systems become more capable, their blind use remains a risky undertaking given their tendency to generate inaccurate and false information. If AI is used to support a building code study or to develop a design narrative, architects should recognize that Rule 1.101 of the AIA’s Code of Conduct states that “Members shall demonstrate a consistent pattern of reasonable care and competence”.
Furthermore, the use of AI systems challenges the concept of the architect’s ‘responsible control’ over their deliverables. AIA Code of Conduct Rule 4.102 states that “Members shall not sign or seal drawings, specifications, reports, or other professional work for which they do not have responsible control.” While this rule is framed in the context of supervising the work of licensed consultants, it is not a stretch to assert that today’s AI technology landscape requires professionals to consider AI agents and platforms within their oversight and control of professional activities.
Tactic #1 - Exercise Responsible Control: The use of generative AI systems is now an inevitable component of the architect’s toolkit for producing deliverables, including images, drawings, and reports. To meet standards of competence, care, and responsible control, professionals should be diligent in enacting internal policies and procedures for reviewing the adequacy, accuracy, and quality of AI-based outputs.
Bias and Prejudice
AI systems make predictions based on their underlying training data. Depending on the selection and weighting of that data, an AI system may learn from historical patterns that reflect biases present in society. This can lead to outputs that reinforce stereotypes and perpetuate inequalities. This has become a concern in medicine and health care, where AI systems are used in various areas of clinical decision making. One example: “Research shows that when race is used to estimate kidney function, for example, it can lead to longer wait times for Black patients to get on transplant lists. This is often due to an overestimation of when the kidneys will give out.”
When using AI to support architectural design, professionals should be aware of the potential for AI bias as a matter of exercising “unprejudiced and unbiased judgment when performing all professional services.” (AIA Code of Conduct Canon III – Obligations to the Client). During design, AI image generators have become a popular means of producing visualizations of spaces, urban environments, and buildings. Yet, these generators have also demonstrated a tendency to exaggerate stereotypes. “Even images of everyday objects — such as doors and kitchens — showed bias. Stable Diffusion tended to depict a stereotypical suburban U.S. home…In reality, more than 90 percent of people live outside of North America.”
Tactic #2 - Evaluate and Critique Bias: As AI tools become more prevalent in different stages of the design process, professional design critique becomes all the more important. As with evaluating outputs for accuracy, design professionals need to critically evaluate the work and deliverables: is the architectural product reinforcing and exaggerating the past, or is it enabling the design to move into the future?
Copyright and Intellectual Property
In recent years, AI has been at the center of numerous legal developments related to copyright and intellectual property. The US Copyright Office has issued guidance that AI output cannot receive copyright protection if it is solely the result of prompts; it must pass a threshold of human authorship and originality. Additionally, AI providers have been accused of copyright infringement for using copyrighted content as training data without the consent of the original authors.
In April 2025, twelve US copyright cases brought by prominent authors against OpenAI and Microsoft were consolidated over claims that AI systems, like ChatGPT, had been trained using copyrighted materials without consent. In June 2025, Disney filed a US federal lawsuit against Midjourney, a popular AI image generator, for “calculated and willful” copyright infringement.
As members of a creative industry, architects need to remain vigilant regarding the use of AI systems and the potential for copyright infringement. For example, as dictated in its Terms of Use, OpenAI places any responsibility for the inputs or outputs of its system squarely on the user: “Input and Output are collectively “Content.” You are responsible for Content, including ensuring that it does not violate any applicable law or these Terms.”
As professionals, architects have an obligation to “not knowingly violate the law” (AIA Code of Conduct Rule 2.101), an obligation that “includes the federal Copyright Act, which prohibits copying architectural works without the permission of the copyright owner.”
Tactic #3 - Provide Transparency: The dust has yet to fully settle with regards to US federal copyright law and the use of AI-based systems. As a creative profession, architects will need to stay aware of evolving copyright laws and the risks that using AI systems poses to their intellectual property. To help mitigate risk, and in the interest of transparency, architects using AI may opt to implement AI disclosures and disclaimers citing the use of AI in the course of their services.
Trust, Security, and Confidentiality
Reading the ‘fine print’ is becoming more essential for adopters of AI systems. Complicated end user license agreements (EULAs) are becoming packed with terms that can introduce unanticipated risks for customers of an AI service. These might include terms granting the AI company rights over any data submitted to the system.
To get reliable AI output, it is not uncommon for users to submit important contextual information such as written documentation and supporting images. This context can help an AI system deliver more relevant output responses for the user. It is not a stretch to imagine these systems might be used, knowingly or unknowingly, with sensitive or confidential information.
The risk is that sensitive or confidential information might be disclosed to a third party and even potentially used in further training or enhancement of the AI system. From the standpoint of the client relationship, we can refer to AIA Rule 3.401, which states that “Members shall not knowingly disclose information that would adversely affect their client or that they have been asked to maintain in confidence.”
Tactic #4 - Review Terms of Use: Reading and understanding the ‘fine print’ (EULAs and Terms of Use) of AI systems is more important than ever. These documents dictate terms for user privacy, confidentiality, data use, and ownership of AI output. It has become common for EULAs to contain provisions that allow AI service providers to train their models on data you submit. In some extreme cases, an AI provider may also assign itself rights to use or publish outputs you generate.
Energy, Water, and Climate Change
The energy impacts of AI are becoming well documented. AI systems rely on data centers to train models, store data, and field user queries and prompts. A Washington Post investigative report found that generating a 100-word email with AI required the equivalent of one bottle of water to cool the data centers involved and consumed enough electricity to power 14 LED light bulbs for an hour. By 2028, data centers are projected to account for between 6% and 12% of US electricity use. The energy demand has grown so substantial that AI providers, like Google, are quietly scaling back or removing their environmental goals in favor of AI data center buildouts.
The effects of AI on energy and water use should give all architects pause in light of their obligations to “promote sustainable design and development in the natural and built environments and to implement energy and resource conscious design.” (AIA Code of Conduct Canon VI – Obligations to the Environment).
On the other hand, AI’s ability to process data, identify patterns, and support analysis is often cited as a benefit for optimizing energy efficiency and disaster preparedness. For the architecture professional, balancing the climate risks and benefits of using AI is important for guiding adoption.
Tactic #5 - Assess Environmental Impact: Architects should educate themselves about the visible and hidden environmental costs of using an AI system. AI offers a variety of uses, and some are arguably more wasteful than others in terms of climate impact. An architect employing AI to enhance their sustainability-oriented services may determine that the use ultimately justifies the cost.
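To make these costs more tangible, the figures from the Washington Post report above (roughly one bottle of water and 14 LED-bulb-hours of electricity per 100-word output) can feed a rough back-of-envelope estimate of a firm's AI footprint. The sketch below is illustrative only: the 0.5-liter bottle size, the 10-watt LED assumption, and the usage volumes in the example are assumptions, and real consumption varies widely by model, provider, and data center.

```python
# Illustrative back-of-envelope estimate of a firm's AI usage footprint.
# Per-prompt figures follow the Washington Post report cited above
# (~1 bottle of water and ~14 LED-bulb-hours per 100-word output); the
# bottle size, LED wattage, and usage volumes are assumptions, not data.

WATER_LITERS_PER_PROMPT = 0.5    # assumed ~500 ml bottle per 100-word output
LED_BULB_HOURS_PER_PROMPT = 14   # bulb-hours of electricity per output
LED_WATTS = 10                   # assumed wattage of one LED bulb

def estimate_footprint(prompts_per_day: int, workdays: int = 250) -> dict:
    """Return rough annual water (liters) and electricity (kWh) estimates."""
    yearly_prompts = prompts_per_day * workdays
    water_liters = yearly_prompts * WATER_LITERS_PER_PROMPT
    electricity_kwh = yearly_prompts * LED_BULB_HOURS_PER_PROMPT * LED_WATTS / 1000
    return {
        "prompts": yearly_prompts,
        "water_liters": water_liters,
        "electricity_kwh": electricity_kwh,
    }

# Example: a 20-person studio averaging 5 prompts per person per day.
print(estimate_footprint(prompts_per_day=20 * 5))
```

Even a coarse estimate like this can help a practice compare AI-heavy workflows against the energy and carbon benefits those workflows are expected to deliver.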
Towards Ethical AI Adoption for Architecture
Artificial Intelligence is part of everyday life and has found a foothold in the architectural design process. Image generation, building code analysis, design narrative authoring, and even 3D model generation are among the use cases being implemented today. Even with these extraordinary capabilities, AI’s impact on standards, trust, and climate poses serious questions for architects with regards to professional conduct and ethics.
Balancing AI’s transformative capabilities against the foundations of professional conduct will only grow in importance. Ethical AI guidelines and frameworks are being developed by a number of organizations, including UNESCO’s Recommendation on the Ethics of Artificial Intelligence. As AI continues its adoption trend, professionals should strive to establish clear governance, with tactics for harm minimization, transparency, sustainability, privacy, and security.
To recap:
- Tactic #1 – Exercise Responsible Control: Architects should be mindful of their standards of competence and responsible control and enact policies for meeting quality expectations for AI output.
- Tactic #2 – Evaluate and Critique Bias: Architects should implement design review and critique processes to ensure that AI output, such as imagery, does not reinforce harmful biases or stereotypes.
- Tactic #3 – Provide Transparency: Architects should carefully consider the evolving legal landscape regarding AI training, copyright, and fair use. Transparent disclosures and disclaimers of AI use, along with documentation of how AI output was transformed by human contribution, are examples of ways to safeguard creative ownership of the design.
- Tactic #4 – Review Terms of Use: Architects should conduct careful review of AI terms of service with regards to maintaining trusted relationships and expectations for privacy, security, and confidentiality.
- Tactic #5 – Assess Environmental Impact: Architects should weigh their commitments to sustainability and environmental impact and carefully select uses for AI that will enable them to meet the challenges posed by climate change.
Are you developing a digital strategy? We can help…
- Learn more about Proving Ground’s Digital Transformation services.
