Interest in implementing artificial intelligence (AI) solutions to enhance efficiency and gain a competitive edge has surged. At the same time, concerns about legal compliance, particularly in light of pending EU regulations addressing AI, remain of utmost importance.
Artificial Intelligence Regulatory Landscape
Currently, no universally applicable legislation imposes specific obligations related to AI. However, this is expected to change in the near future. The core act, known as the “Artificial Intelligence Act,” which establishes standardised rules on AI, is scheduled to be finalised later this year. The Commission, Parliament, and Council are currently negotiating its final wording. The act will primarily impose obligations on AI system providers and on entities using AI systems under their control.
The Artificial Intelligence Act aims to establish a legal framework for the development and deployment of AI systems within the EU. Its primary objective is to ensure that AI technologies are employed in a manner that is transparent and accountable and that respects fundamental rights and values.
Common Concerns about Artificial Intelligence
In addition to regulations specific to AI, it is crucial to analyse AI usage within the framework of existing legislation. Frequently raised questions include:
- Determining ownership rights over AI-generated outputs (completions) and establishing usage protocols, including the consequences of integrating such outputs with the client’s proprietary solutions.
- Allocating liability for intellectual property infringements resulting from the use of AI solutions and completions/materials generated by generative AI (e.g., identifying the entity responsible for copyright claims when third-party materials were used in training models).
- Addressing potential access by the AI system provider to data inputted into the model, particularly during content analysis and filtering to ensure proper usage.
- Utilising client data for further training of the provider’s models.
- Ensuring compliance with GDPR, especially in terms of upholding data subject rights and implementing requirements related to automated data processing (including profiling), as well as addressing issues of inaccurate personal data generated by AI solutions.
Solutions to these concerns can primarily be found in the contract with the AI system provider and in the technical documentation detailing data flows or service configuration options.
Furthermore, evaluating necessary adjustments within the client’s organisational structure is crucial to ensure lawful AI usage and mitigate solution-specific risks (e.g., over-reliance on AI systems or potential misinterpretations by AI solutions). These efforts often involve formulating appropriate usage policies for AI, updating data protection documentation, and implementing protocols for human oversight of AI-generated content.
When identifying legal risks and their solutions, it is worth remembering that there are considerable differences not only between different versions of AI solutions but, most importantly, between AI service providers, especially in how they address the above issues in their contracts or in the architecture of their services. The situation in this area is often very dynamic. For example, Microsoft recently published the Microsoft Copilot Copyright Commitment, under which, starting October 1st, Microsoft extends its existing contractual liability rules for intellectual property infringement to commercial Copilot services and Bing Chat Enterprise.
As a result, Microsoft will defend the customer and pay any amounts awarded in adverse judgments or settlements if the customer is sued by a third party for infringement of intellectual property rights through the use of the Copilot services or their generated responses (excluding trademarks). To benefit from this commitment, the customer must use the protections and content filters built into the services by Microsoft and must not use the services intentionally to create infringing materials. A provider's obligation to defend against claims related to the use of AI-generated content is undoubtedly an important change in the approach to the customer and may make the decision to adopt AI easier.
Implementing AI is already possible
Despite many valid concerns about the risks of using AI, as is common with new technologies, it should not be assumed without further analysis that implementing such systems in an organisation is currently impossible, particularly given the still-ongoing work on the AI Act. The regulations currently in force in Poland do not generally prohibit the use of such solutions. However, it is important to approach the topic thoroughly, including by properly defining the rights and obligations of the user and the AI solution provider, defining the ways in which AI solutions may be used in the organisation, and adjusting internal procedures. Many entities already use this technology in their daily work, demonstrating many interesting applications of AI (e.g. efficient document review, summarising and analysing large amounts of text) and how many further benefits it can bring.