
AI in the Workplace: Employer Considerations With Use of AI, ChatGPT

February 13, 2024
Megan Regennitter

Forms of Artificial Intelligence (“AI”) have become commonplace for employers in recent years, particularly in the hiring process. From screening resumes to scheduling interviews, opportunities to automate human resource-related procedures are increasing. With the expansion of ChatGPT, an AI language platform that interacts in conversation with a user, the use of AI as a tool for efficiency in the workplace will continue to rise.

The following are some considerations for employers before implementing AI tools, including ChatGPT, in the workplace.

  • Bias in Decision-Making: Any AI tool depends on the information input into its system, which becomes the basis for its decision-making. Because humans with implicit bias supply that information, the software itself can respond with bias. Even when AI is used as a hiring tool without a “human” decision-maker, the risk of bias remains. Final decision-making for all hiring decisions should require human review to help prevent discrimination. The EEOC released technical guidance in 2023 regarding adverse impact in the use of AI under Title VII.[1]
  • Data Privacy and Confidentiality: The basic function of AI software such as ChatGPT is to “learn” from the information it receives. If employers permit its use in the workplace, employees could be providing proprietary or confidential information, or even disclosing trade secrets. All employee confidentiality and non-disclosure agreements should include language covering these circumstances, and training on AI technology should expressly address appropriate use based on company-specific operations.
  • Intellectual Property: When a user receives a response from ChatGPT for use in the workplace, source vetting or reference checking remains necessary to ensure that all trademarked or copyrighted information is properly used by the employer.
  • Internal IT Security: Use of generative AI can increase the risk of security breaches, including phishing and fraudulent communications with employees. The need for additional IT training for employees and enhanced security for organizations will continue to rise as the use of AI increases.
  • FTC Unfair or Deceptive Practices: Entities governed by the Federal Trade Commission (“FTC”) and state law equivalents must consider whether their use of AI constitutes an “unfair or deceptive” trade practice if the use of AI is misrepresented or withheld from consumers. The FTC released guidance on this issue in 2020.[2] Generally, best practices require transparency.

Whether your company is considering the use of AI tools for the first time or is looking to expand AI implementation as the technology is refined, involve your attorney in the process to help assess risk and provide necessary updates to employment agreements, contracts, and more. Enhanced technology only provides cost savings and efficiency when paired with appropriate safeguards for employers.

[1] https://www.eeoc.gov/laws/guidance/select-issues-assessing-adverse-impact-software-algorithms-and-artificial

[2] https://www.ftc.gov/business-guidance/blog/2020/04/using-artificial-intelligence-algorithms

This article is a publication of MWH Law Group LLP and is intended to provide general information regarding legal issues and developments to our clients and other friends. It should not be construed as legal advice or a legal opinion on any specific facts or situation. For further information on your own situation, we encourage you to contact the author of the article or any other member of the firm.