Raimund Laqua

Should Using ChatGPT Result in Loss of License to Practice?


A recent incident involving a lawyer who relied on ChatGPT to prepare a court filing has raised questions about the reliability and accountability of artificial intelligence tools in professional fields. The lawyer, Steven A. Schwartz, submitted a brief based on research conducted by ChatGPT, which included fabricated court cases. The incident highlights the limitations and risks of relying solely on AI-generated content, and it prompts a broader question: should using ChatGPT put a professional's license to practice — for example, an engineering license — at risk?


While ChatGPT and similar AI tools provide utility across various industries, including the legal profession, it is crucial to acknowledge their limitations. In the case of Steven A. Schwartz, ChatGPT fabricated court cases that did not exist. This not only raised questions about the accuracy of AI-generated content but also underscored the need for human verification and critical analysis.


Professional Responsibility and Ethical Considerations


This incident involving ChatGPT has shed light on the importance of adhering to professional ethics and exercising due diligence when utilizing AI tools. While technology can enhance productivity and efficiency, professionals must remember that their expertise and judgment are paramount.


In the legal profession, submitting inaccurate or false information can have severe consequences. Courts and judges rely on the accuracy and integrity of the information presented to them, and AI tools should never substitute for proper legal research and verification. The incident prompted Judge Kevin Castel to set a hearing to consider potential sanctions against Steven A. Schwartz and his law firm, Levidow, Levidow & Oberman. Such consequences reflect the need for accountability when incorporating AI into professional practice.


Professionals, especially those in highly regulated fields like engineering, bear a significant responsibility to provide accurate and reliable information. Consider the potential consequences of engineers relying on ChatGPT when working on critical infrastructure systems: inadequate verification, or false information introduced unnoticed by the AI tool, could lead to design flaws, system vulnerabilities, or erroneous control commands.


Loss of License to Practice?


Should the use of ChatGPT or similar AI tools, then, result in the loss of an engineering license? While this incident raises real concerns about reliance on AI-generated content, revoking a license based solely on the use of ChatGPT may be an extreme measure. It is essential to consider the circumstances of each case, including the intent and the level of negligence involved.


Instead of automatic revocation, it might be more appropriate to develop guidelines and best practices for incorporating AI tools into professional practice. Professionals should receive adequate training and education on the ethical implications, limitations, and potential risks associated with AI tools. Licensing bodies can play a crucial role in setting standards and ensuring that professionals are well-equipped to navigate the challenges of integrating AI into their work.


What Should be Done?


While the incident involving ChatGPT and a lawyer highlights the risks of relying solely on AI-generated content in the legal profession, the prospect of engineers and other professionals doing the same raises even greater concerns. Professionals must exercise caution, diligence, and critical thinking when incorporating these technologies into their work.


Revoking a professional license may be the right course of action when AI technologies are used in ignorance of their limitations, or when their use puts public safety at risk. At the same time, it is crucial to emphasize professional responsibility, ethical considerations, and the need for comprehensive guidelines and training.


Responsible use of AI will require support from multiple levels:


  1. Governments need to establish effective legislation to regulate the use of AI where public safety may be at risk.

  2. Professional regulatory and licensing bodies need to establish appropriate codes of conduct and practice guidelines with respect to the use of AI.

  3. Professionals need to make themselves aware of the risks associated with AI as it relates to their discipline and practice areas.

  4. Manufacturers need to self-regulate by establishing responsible AI policies and practices.
