AI headlines you might have missed (and questions you should be asking)

When navigating the rapidly evolving world of modern Artificial Intelligence systems, it is easy to be taken in by today’s fever pitch of hype: wild manifestos from AI company CEOs, white papers designed to stoke interest in their products, and new research exploring impacts on the human experience.

In the AEC industry, AI continues to dominate the discourse on digital technology implementations in architecture, engineering, and construction. While design and construction companies have been diligent in enacting AI pilots and procuring enterprise-wide implementations, many organizations are finding it difficult to realize returns on their investment ‘at scale’.

They are not alone. 

A recently published 2025 MIT study found that 95% of AI pilots in businesses are failing. The study points to a disconnect between individual workers’ perceived productivity gains and desired improvements to business-wide metrics such as profit and loss.

As businesses continue to explore their AI strategy, it is essential to ask hard questions about the impact your implementation is having. This blog article will illustrate that the adoption trajectory in 2026 is not as straightforward as the evangelists claim. I will discuss recent studies showing minimal impacts on productivity metrics, and surveys revealing that workers are experiencing new forms of ‘mental strain’ as a result of managing AI systems. The legal context for AI is also shifting, with proposed guardrails on professional activities and new developments in copyright. Finally, environmental concerns continue to escalate.

As businesses continue to explore their AI strategy, it is essential to consistently ask hard questions about the impact your implementation is having.

AI might be increasing job complexity and mental fatigue. How do you keep your team sharp and engaged? 

Productivity is arguably the most enticing aspect of the AI sales pitch. Navigating complex tasks more efficiently and generating a higher quantity of output are appealing metrics for service-oriented businesses, like architecture, where schedules and fees are increasingly tight. Now that AI has found an adoption foothold in many businesses, research is attempting to measure the impact. A recent study of 25,000 workers across 7,000 companies in Denmark found that AI use saved workers an average of 2.8 hours per week. However, AI also introduced new tasks and increased time spent on other tasks. The results illustrate a wash: no significant impact on earnings or net time savings was recorded in this study.

Meanwhile, the Harvard Business Review recently published a piece on “AI Brain Fry”: a phrase used to describe increasing mental fatigue as a result of managing AI workflows. “This AI-associated mental strain,” write the authors, “carries significant costs in the form of increased employee errors, decision fatigue, and intention to quit.”

“This AI-associated mental strain,” write the authors, “carries significant costs in the form of increased employee errors, decision fatigue, and intention to quit.” – Harvard Business Review

The study found that “the most mentally taxing form of AI engagement was oversight, or the extent to which the AI tools required the worker’s direct monitoring.” And while productivity increased when workers used a small number of AI tools, adding four or more tools caused productivity to drop.

The lessons for an AI adoption strategy are numerous. Strategies should consider how a new AI capability might not simply replace a task, but might also introduce new, unanticipated tasks. Furthermore, these studies point towards understanding how a collection of AI-powered tools can affect the cognitive load on staff. Considering how tools might increase task switching, or how poorly they integrate with existing tools, can help avoid the pitfalls of AI brain fry.

Summary

  • AI use replaces certain tasks but also creates new tasks.
  • A recent study illustrates a “wash” on productivity metrics.
  • AI uses that require human oversight and coordination among multiple tools are producing acute mental fatigue.

Why is this important? AI adoption will require companies to balance unanticipated task creation with the potential for increased human fatigue, disengagement, and errors.

AI is erasing entry-level tasks and jobs. How do you build up the next generation of professionals?

One of the most pervasive dystopian fears around AI has been the replacement or elimination of jobs. A recent Stanford study illustrates “early large-scale evidence consistent with generative AI disproportionately impacting entry-level workers in the American labor market.” Meanwhile, “employment trends for more experienced workers… have remained stable or continued to grow.”

“The real loss isn’t just a category of jobs. It’s the training layer inside organizations,” writes Samantha Walravens in a recent Forbes piece. “Entry-level work has always done more than produce output; it’s where people learn what ‘good’ looks like in practice.”

In my experience developing digital strategies for AEC organizations, one of the most common gaps is in investments in learning. This has been a pervasive struggle since long before AI entered the popular discourse. As schedules become tighter and budgets become leaner, it is often the investments in training, professional development, and knowledge management that fall by the wayside. If AI is to replace early-career ‘on the job’ learning opportunities, what mechanisms do businesses have to develop a new generation of decision makers?

“Entry-level work has always done more than produce output; it’s where people learn what ‘good’ looks like in practice.” – Forbes

In the case of becoming an architect, entry-level professionals are required to complete NCARB’s Architectural Experience Program (AXP) as a stepping stone towards their professional license. It is interesting to review the AXP’s core competencies through the lens of potential AI impact: the largest subset of required experience (40%) falls under the category of Project Development & Documentation, which includes developing technical drawings and models and coordinating the resolution of conflicts between systems. In an era of AI generation and automation, it is valid to ask whether this required technical experience will remain available to future architects. And if not, what core competency replaces it?

In an era of AI generation and automation, it is valid to ask whether this required technical experience will remain available to future architects. And if not, what core competency replaces it?

Summary

  • AI is proving to be effective at automating or replacing tasks often associated with entry-level job responsibilities.
  • Reducing entry-level opportunities removes an important ‘training layer’ through which businesses give staff experience.

Why is this important? As a matter of long-term strategy, it is incumbent on businesses to consider the experiences they value in their professionals and work towards an adoption strategy that supports that experience. 

The legal context for AI’s professional capabilities is evolving. Should AI chatbots be held liable for professional advice?

As AI systems become more capable, the everyday user is coming to rely on them for on-demand guidance and advice. This includes seeking recommendations that would conventionally come from a professional like a doctor, lawyer, or architect. Today, you can visit any number of chatbots and prompt the system for medical advice or design guidance. You can even ask many of them to ‘role play’ as a professional (like an architect) and direct them to provide responses in an ‘architect’s voice’. Many of these systems have Terms of Service and License Agreements designed to shield the AI company from damages that might result from bad advice or from the chatbot operating outside an acceptable scope.

You can visit any number of chatbots and prompt the system for medical advice or design guidance. You can even ask many of them to ‘role play’ as a professional (like an architect) and direct them to provide responses in an ‘architect’s voice’.

New York is now considering new legislation that would hold AI companies liable if their AI chat interfaces provide advice normally given by expert human professionals such as doctors, lawyers, engineers, and architects – especially if that advice leads to harmful outcomes for the user. On the one hand, people can already use search engines and popular resources to look up information about professional matters. Where AI chat interfaces become problematic is in their ability to impersonate professionals through an interactive conversation, and in their tendency to claim capabilities they do not possess and mislead the user.

If legislation like this is successful, it will likely push many of these systems to add further guardrails and disclaimers for everyday users. It could also lead to a subset of more specialized systems made available to professionals, who ultimately hold the liability risks associated with their area of expertise.

Summary

  • AI tools are often capable of providing access to professional knowledge and advice – even impersonating a professional or asserting capabilities that are not in the product’s scope.
  • AI providers disclaim uses, limitations, and errors in their terms of service.
  • New York lawmakers are considering legislation that would hold AI companies liable for damages and harm if AI tools, like chatbots, impersonate professionals and provide advice to users.

Why is this important? Professional knowledge carries serious responsibilities and liabilities, requiring licensure and certification. AI tools often disclaim their responsibility for providing accurate output or producing damaging outcomes. Newly proposed regulations would place additional guardrails on the technology and hold AI creators accountable for their products.

Generative AI output is not eligible for US copyright. How do you meet the threshold of human originality? 

Another legal consideration for the use of AI is in how it impacts your ability to assert copyright of the work being produced with it. The U.S. Supreme Court recently declined to hear a case involving a computer scientist who was denied a copyright for visual art created with AI. This leaves lower court rulings in place which upheld the U.S. Copyright Office’s rule that AI-generated content is not eligible for copyright because it does not meet a human authorship requirement.

In addition to the question of whether copyright can be applied to AI outputs, additional reports from the U.S. Copyright Office have addressed other concerns regarding AI training on copyrighted works: “making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use.”

As providers of creative solutions, architects risk putting themselves in a position where the tools they are utilizing prevent them from claiming ownership of the work product.

This evolving legal context should give creative businesses pause in how they calibrate their adoption of AI technology for creative endeavors. As providers of creative solutions, architects risk putting themselves in a position where the tools they are utilizing prevent them from claiming ownership of the work product. As creative businesses use more AI tools, they may need to formally document and defend how their work involved sufficient human influence and creativity in the design process.

Summary

  • U.S. Copyright policy prevents users from claiming copyright over AI-generated output.
  • Commercial use of systems that use copyrighted works for training and generation goes beyond established fair use.

Why is this important? Creative professionals, like architects, will need to consider how their uses of AI might risk their ability to claim ownership over their work products.

The environmental impact is becoming more pronounced. How will you align AI with your sustainability approach?

The environmental impact of AI has been a long-standing concern of critics of the technology. The impact is largely driven by the energy consumption, water usage, and carbon emissions associated with the data centers used for training AI models on large quantities of data. In 2025, researchers found that AI produced the same amount of carbon emissions as the entire city of New York and that its water use exceeded the entirety of global bottled-water demand. In 2026, construction spending on data centers is projected to reach $128 billion.

Among these new facilities are 11 data center campuses powered by natural gas, according to construction permits recently reviewed by Wired. Per this report, these US-based centers “have the potential to create more greenhouse gases than the country of Morocco emitted in 2024” and equate to “more than 129 million tons of greenhouse gases per year.”

Can an architectural design process that is supported by an unsustainable technology be used as a means to pursue sustainable building strategies?

As I’ve written about previously, the use of AI by professional architects presents an ethical environmental challenge: can an architectural design process that is supported by an unsustainable technology be used as a means to pursue sustainable building strategies? The answer is not an easy one, as AI could theoretically be employed to support better building decisions that justify its initial environmental cost. It is also feasible that AI implementations will become more sustainable as the strategies for training and running the models mature.

This research paper proposes that strategies for optimizing algorithms and hardware, in conjunction with smarter building systems for data centers, can dramatically reduce the environmental impact of AI. Additionally, the authors propose looking towards more targeted uses of AI and machine learning: “Rather than use the biggest multi-purpose model available, efforts should be invested in developing reduced architectures that are capable of tackling the problem in hand.”

Summary

  • In 2025, AI produced the same amount of carbon emissions as New York City and consumed fresh water equivalent to the global bottled-water industry.
  • Optimizing algorithms and hardware, along with sustainable building strategies for data centers, could produce long-term benefits for realizing a more sustainable AI infrastructure.

Why is this important? For sustainability-conscious designers and builders, AI’s environmental impact is significant and is likely to shape design processes. Professionals should evaluate uses that align with business ethics and targeted strategies that yield specific benefits.


How can we help?

Proving Ground specializes in helping AEC organizations navigate the complexity of digital transformation, including the impacts on people, processes, and technology. We work with our clients to explore new digital trends and evaluate their potential impacts on businesses and projects. If you are looking for unbiased, well-informed, human-first guidance, contact us!