Tag: artificial intelligence

Artificial Intelligence and the Law

Interest in implementing artificial intelligence (AI) solutions to enhance efficiency and gain a competitive edge has surged. At the same time, concerns about legal compliance, particularly in light of pending EU regulation of AI, remain of utmost importance.

Artificial Intelligence Regulatory Landscape

Currently, there exists no universally applicable legislation imposing specific obligations related to AI. However, this is expected to change in the near future. The finalization of the core act, known as the “Artificial Intelligence Act,” which establishes standardized rules on AI, is scheduled for later this year. The Commission, Parliament, and Council are currently engaged in negotiations to determine the final wording of this act. It will primarily impose obligations on both AI system providers and entities using AI systems under their control.

The Artificial Intelligence Act aims to establish a legal framework for the development and deployment of AI systems within the EU. Its primary objective is to ensure that AI technologies are employed in a manner that is transparent, accountable, and respects fundamental rights and values.

Common Concerns about Artificial Intelligence

In addition to regulations specific to AI, it is crucial to analyse AI usage within the framework of existing legislation. Frequently raised questions include:

  • Determining ownership rights over AI-generated outputs (completions) and establishing usage protocols, including the consequences of integrating such outputs with the client’s proprietary solutions.
  • Allocating liability for intellectual property infringements resulting from the use of AI solutions and completions/materials generated by generative AI (e.g., identifying the entity responsible for copyright claims when third-party materials were used in training models).
  • Addressing potential access by the AI system provider to data inputted into the model, particularly during content analysis and filtering to ensure proper usage.
  • Utilising client data for further training of the provider’s models.
  • Ensuring compliance with GDPR, especially in terms of upholding data subject rights and implementing requirements related to automated data processing (including profiling), as well as addressing issues of inaccurate personal data generated by AI solutions.

Solutions to these concerns can primarily be found within the contract with the AI system provider and technical documentation detailing data flow or service configuration options.

Furthermore, evaluating necessary adjustments within the client’s organisational structure is crucial to ensure lawful AI usage and mitigate solution-specific risks (e.g., over-reliance on AI systems or potential misinterpretations by AI solutions). These efforts often involve formulating appropriate usage policies for AI, updating data protection documentation, and implementing protocols for human oversight of AI-generated content.

When identifying legal risks and their solutions, it is worth remembering that there are considerable differences not only between different versions of AI solutions but, most importantly, between AI service providers, especially in how they address the above issues in their contracts and in the architecture of their services. The situation in this area is often very dynamic. For example, Microsoft recently published the Microsoft Copilot Copyright Commitment, under which, starting October 1st, Microsoft will extend its existing contractual liability rules for intellectual property infringement to commercial Copilot services and Bing Chat Enterprise. As a result, Microsoft will defend the customer and pay any amounts awarded in adverse judgments or settlements if the customer is sued by a third party for infringement of intellectual property rights through the use of Copilot services or the responses they generate (excluding trademarks). To benefit from this commitment, the customer must use the protections and content filters built into the services by Microsoft and must not use the services intentionally to create infringing materials. This obligation to defend against claims related to the use of AI-generated content is undoubtedly an important change in the approach to the customer, and it may make the decision to adopt AI easier.

Implementing AI Is Already Possible

Despite many valid concerns about the risks of using AI, as is common with new technologies, it should not be assumed without further analysis that implementing such systems in an organisation is currently impossible, particularly given the still-ongoing work on the AI Act. The regulations in force in Poland do not generally prohibit the use of such solutions. However, it is important to approach the topic thoroughly, including by properly defining the rights and obligations of the user and the AI solution provider, defining how AI solutions may be used within the organisation, and adjusting internal procedures. Many entities are already using this technology in their daily work, demonstrating interesting applications of AI (e.g. efficient document review, summarising and analysing large amounts of text) and the many further benefits it can bring.

Financial Fraud and Artificial Intelligence

Fraud often involves impersonation, and artificial intelligence (AI) has emerged as a potent tool that can effectively mimic human behaviour. This means that fraudsters can leverage AI to enhance the credibility, efficiency, and speed of their attacks.

Like any other technology, AI can be employed for financial fraud. It can be used to impersonate individuals, automate phishing attacks, and manipulate data. To guard against the exploitation of AI for fraudulent purposes and mitigate the associated risks, robust security measures must be implemented. It is also crucial to establish ethical guidelines governing the use of AI to prevent misuse or abuse of the technology.

AI’s Role in Fraudulent Activities:

At the consumer level, AI can generate scripts that fraudsters use to deceive people over the phone and coax them into making unauthorised bank transfers.

On a larger scale, particularly concerning financial institutions such as banks and lenders, generative AI has the capability to create fabricated videos and photographs of non-existent individuals. In other words, AI can provide deceptive “evidence” to pass identity checks, enabling the opening of fraudulent accounts, execution of unauthorised transfers, and even the creation of (fake) assets or liquidity against which loans can be secured.

Recommended Actions:

The potential for AI to be employed in financial fraud cannot be overstated, especially considering the accessibility of powerful AI models like ChatGPT that anyone can utilise anonymously. Firms that may be at risk should take the following steps:

  • Scrutinise the authenticity of all identification documentation provided for anti-money laundering (AML) and know your customer (KYC) purposes. If possible, seek information from reputable third parties such as public registries or verification firms instead of relying solely on direct sources. In case of doubts, consult an in-house or external cybersecurity team.
  • Implement measures to ensure that existing clients and customers are not being “spoofed” or impersonated. This may involve employing multi-factor authentication and, in some cases, conducting face-to-face meetings to validate identities.
  • Train vulnerable staff members to recognise patterns indicative of financial fraud. Although the methods employed by fraudsters may have evolved with the use of AI, their underlying goals remain the same. Any unexplained or out-of-character transactions or borrowing without an apparent purpose should be regarded with suspicion, regardless of how convincing the supporting documentation appears.

Regulation of AI is inevitably on the horizon. In March of this year, the UK government published a white paper outlining its proposed “pro-innovation” approach to AI. However, AI is already pervasive, and the white paper itself acknowledges that “the pace of change itself can be unsettling.” In the interim, self-help is the best and only defence.

Disclaimer: This publication provides a general summary of the law and should not substitute for tailored legal advice based on your specific circumstances.