The first wave of AI provisions is more impactful than it seems – AI literacy obligations and why your organisation should already be compliant
“The AI genie is out of the bottle […]. It is affecting all aspects of society. It’s the machine under everything”.


Cosmina Simion, Managing Partner, WH Simion & Partners
This is the opinion of Cynthia Breazeal, MIT professor of media arts and sciences[1], and probably one of the best ways to summarise the present impact of AI on all aspects of life. Such consequences require a reaction from the legal system and, for EU Member States, this reaction was embodied in the AI Regulation[2], finally published in the Official Journal on July 12, 2024. However, the first set of provisions only becomes applicable on February 2, 2025, covering AI literacy obligations and prohibited AI practices, thus merely laying the groundwork for the waves of provisions to come this year.
The present debate in the Romanian gambling space revolves around players’ self-exclusion options and the correlative obligations applying across the board to gambling operators, to be managed through a national platform hosting the database provided for under Article 151 of GEO 77/2009, promised but yet to be implemented by the ONJN. Against this background, we believe it worthwhile to briefly analyse and summarise what we find relevant in the digital space, namely the use of AI tools: the what, the how, the what not to and the how not to.
For this reason, this article provides a general outlook on the implications of the AI literacy obligations for three main areas of practice of legal professionals: privacy and data protection, intellectual property and, finally, gambling, which will be used as a catch-all domain in order to outline the more general consequences of the requirement to adopt AI literacy measures in businesses.
1. General outlook on AI literacy requirements in the AI Regulation
Article 4 of the AI Regulation provides for the general regime on AI literacy measures. In more concrete terms, AI literacy measures can be described as the knowledge and skills that allow humans to understand, evaluate and use AI systems and tools safely and ethically[3]. The AI Regulation defines AI literacy in Article 3(56) as “skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause”.
Besides the legal obligation triggered by the imminent application date of February 2, 2025, the implementation of AI literacy measures has already been identified as a necessity due to the ever-growing use of all types of AI in corporate environments. For example, a study from February 2023 (before the Regulation was even adopted) shows that 68% of 5,067 respondents who used AI at work said they did not disclose this usage to their bosses[4]. That is why a commitment running through all levels of the organisation is preferable to an outright ban on AI software at work.
Such measures can take the form of initiatives such as continuous training on specific AI-related subjects, applying AI software to real-life situations within the organisation (preferably lower-risk problems at first), raising awareness about ethical considerations through internal policies, or adopting a code of conduct on AI.

2. Specific implications of AI literacy obligations for different branches of law
Employees should be made aware of how their data is usually processed by the AI systems they use and, more generally, of how AI systems can carry out profiling, how they can become the subject of automated decision-making and so on. There are also consequences for the employer, who is regarded as a controller under the GDPR: the privacy notice should now include references to how the company uses AI and how it will process data for this purpose. Internal documents such as regulations and policies should also take this into account where employee data is processed in this way. Because employees are the main recipients of these provisions, their rights to information and to object are also impacted by the AI literacy obligation.
Although it is evident that no practice has crystallised at this point, we do not exclude the risk of national DPAs and the national AI authorities under the AI Act asking how the controller / processor (GDPR) or the deployer / provider (AI Regulation) has implemented the AI literacy obligation in its business, mainly in relation to the fairness and transparency principles under the GDPR. All of the above should be taken into account, as the authorities under both the GDPR and the AI Act can, and most likely will, cooperate in order to audit businesses’ compliance from both points of view.
Similarly to privacy concerns, the impact of AI on intellectual property protection should be one of the topics discussed even if the business does not have a creative aim. Understanding licensing agreements is particularly important in order to properly evaluate the IP regime of outputs generated by AI. Even more so, AI literacy matters for IP because obligations that will enter into force in the future refer directly to deployers and how they must properly assess the TDM compliance of the AI systems they want to use. This refers to the Text and Data Mining exception provided by the DSM Directive[5], which, in short, allows website hosts to place restrictions on their content being scraped through automated means, scraping being one of the main ways through which training databases for AI are set up.
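By way of illustration only, the short sketch below shows how such a machine-readable scraping restriction might be checked programmatically before content is collected for an AI training set. It relies on robots.txt, which is just one common opt-out signal and not, by itself, a formal TDM reservation under Article 4(3) of the DSM Directive; the URL and crawler name used here are hypothetical.

# Minimal sketch, assuming robots.txt is used as the opt-out signal.
# The page URL and user-agent name below are hypothetical examples.
from urllib import robotparser

def may_scrape(page_url: str, user_agent: str = "ExampleAIBot") -> bool:
    """Return True only if the site's robots.txt does not disallow this crawler."""
    parser = robotparser.RobotFileParser()
    # robots.txt always sits at the root of the host
    root = "/".join(page_url.split("/")[:3])
    parser.set_url(root + "/robots.txt")
    parser.read()
    return parser.can_fetch(user_agent, page_url)

if __name__ == "__main__":
    url = "https://www.example.com/articles/some-page"
    print("Scraping allowed:", may_scrape(url))

A deployer running this kind of check gains only an indication of the host’s wishes; the legal assessment of whether a training database was compiled in a TDM-compliant manner remains a separate exercise.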
3. Gambling-specific implications
Obviously, the heaviest impact will be felt by creative departments such as marketing, but data-heavy departments will also need proper training: HR should be made aware of privacy-related concerns and of biases in training databases when judging candidates for a position, and the same care applies to customer-oriented departments. Beyond these particular cases, all departments should benefit from general training on how AI systems work, their risks and the methods for implementing them ethically in one’s workflow. In relation to gambling marketing in particular, training should outline the risks involved: a more personalised player experience enhanced by AI can lead to addiction-related issues, so the focus should also be on mitigating measures such as deposit limits or time restrictions for players on losing streaks, as sketched below.
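Purely as an illustration of what such mitigating measures could look like in practice, the following sketch shows a simplified responsible-gaming check that an AI-assisted marketing pipeline might run before sending a personalised offer. The thresholds, field names and Player structure are hypothetical and not drawn from any regulation or real system.

# Illustrative sketch only; thresholds and data model are hypothetical.
from dataclasses import dataclass

@dataclass
class Player:
    deposits_today: float      # total deposited in the last 24 hours
    session_minutes: int       # length of the current session
    consecutive_losses: int    # losing bets in a row

def offer_allowed(p: Player,
                  daily_deposit_limit: float = 500.0,
                  max_session_minutes: int = 120,
                  max_losing_streak: int = 10) -> bool:
    """Suppress personalised promotions when basic mitigation thresholds are reached."""
    if p.deposits_today >= daily_deposit_limit:
        return False
    if p.session_minutes >= max_session_minutes:
        return False
    if p.consecutive_losses >= max_losing_streak:
        return False
    return True

# Example: a player on a long losing streak should not receive a new offer.
print(offer_allowed(Player(deposits_today=200.0, session_minutes=45, consecutive_losses=12)))  # False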
Employees should be made aware of how AI systems work, mainly those already used or planned to be used by the gambling company in areas such as marketing, fraud detection or responsible gaming measures. This training should prioritise simple and clear language, which should be the goal for all kinds of programmes related to AI literacy. Moreover, AI literacy advice should be integrated with existing policies where possible, in order to avoid scattering guidance across separate documents. Similarly to privacy concerns, we cannot omit the risk of thematic controls by national gambling regulators, in collaboration with the national AI authority, in order to assess the role of AI in the organisation, its ethical and regulatory implications and so on.
Key takeaways
Although there is no fine linked to a breach of Article 4 of the AI Regulation, authorities are likely to assess this obligation as part of their evaluation of how other provisions of the Regulation are implemented. As we have seen, there is a need both for general training on how AI works and on its opportunities and risks, and for training on the specific needs and limitations of the systems employed by the company[6]. In brief, AI literacy should not be an end goal, but an ongoing process, part of a much larger, coherent internal policy on the use of AI.

WH Simion & Partners
[1] Cynthia Breazeal in Alyson Klein, ‘AI Literacy, Explained’ (EducationWeek, 10 May 2023) <https://www.edweek.org/technology/ai-literacy-explained/2023/05> accessed 3 January 2025.
[2] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).
[3] S Lindauer, ‘AI Literacy: A Framework to Understand, Evaluate, and Use Emerging Technology’ (Digital Promise, 20 June 2024) <https://digitalpromise.org/2024/06/18/ai-literacy-a-framework-to-understand-evaluate-and-use-emerging-technology/> accessed 8 January 2025.
[4] Alex Christian, ‘The Employees Secretly Using AI at Work’ (BBC Worklife, 3 November 2023) <https://www.bbc.com/worklife/article/20231017-the-employees-secretly-using-ai-at-work> accessed 3 January 2025.
[5] Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC.
[6] Erica Werneman Root, Nils Müller and Monica Mahay, ‘Understanding AI literacy’ (IAPP, 15 January 2025) <https://iapp.org/news/a/understanding-ai-literacy> accessed 16 January 2025.