
Current AI policies, legislation, regulations, and more in the EU vs the US




These days it does not matter whether your startup is directly involved in AI. AI is here to stay, and it is not a question of whether this technology will impact your business but how. If you feel flummoxed by the state of AI legislation, regulation, and standards – especially the differences between AI legislation in the EU vs. the US – there is a reason for that. AI is an emergent technology, and the jury is still out on how best to manage all of its uses.


With so much confusion, how can you even begin to think about drafting an AI policy to protect your startup and ensure that this technology is used ethically and within the confines of any existing laws? 


Here we will outline how the EU and the US governments have been handling AI, the considerations that need to be addressed within an internal AI policy, and how your Fractional COO can help.


Laws, regulations, and standards: what’s the difference?


Before delving into the differences between what’s happening on either side of the Atlantic, it’s a good idea to clarify the differences between laws, regulations, and standards:


Laws: Developed by legislative bodies (think: US Congress, EU Parliament). Anything included in a law is legally binding within the jurisdiction of the lawmaking body that passed it.


Regulations: Rules, often sector-specific, based on the interpretation of the law that are made by government agencies.


Standards: Best practices documents that can be developed and published by governmental or non-governmental organizations. A standard is not legally binding unless adherence to it is stipulated within a legal contract or it is incorporated by reference into a law.


As a startup founder, you should familiarize yourself with all of the above that are relevant to your industry – especially those that intersect with AI – before developing any AI policy for your organization.




The US: A decentralized approach to AI


The US has taken a highly decentralized approach to regulating AI. Legislation at the federal level is relatively nascent, and only a few states have passed laws related to AI. Federal government agencies are still in the process of appointing Chief AI Officers, and a number of them have begun issuing draft reports and working on standards.


If you’re thinking about implementing your own AI policy, start with the regulations that govern your sector. What belongs in an AI policy depends on the regulations and standards that apply to your industry, which is why having an AI operations expert in your business can be a tremendous help. They can help you navigate the rapidly changing world of AI regulation.


What this means is that depending upon which sector you are involved in – and especially if you plan to seek grants or other funding from the US federal government or to serve as a government contractor – you will need to do your homework.


Luckily, many government resources, such as the Federal Register, make it easy to subscribe to updates related to AI legislation, rulemaking, and calls for public input at the federal level. Similar resources also exist at the state and local government levels. 
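For teams that want to automate this kind of monitoring, the Federal Register exposes a public API for searching its documents. The sketch below builds a search URL for recent AI-related notices; the endpoint and `conditions[term]` parameter come from the FederalRegister.gov developer API, while the search term and result handling are illustrative assumptions, not a definitive integration.

```python
# Hedged sketch: querying the Federal Register's public API for recent
# AI-related documents. The endpoint and `conditions[term]` parameter are
# part of the documented FederalRegister.gov API; the search term and
# defaults below are illustrative assumptions.
from urllib.parse import urlencode

FR_API = "https://www.federalregister.gov/api/v1/documents.json"

def build_ai_query(term: str = "artificial intelligence", per_page: int = 20) -> str:
    """Build a Federal Register search URL for documents matching `term`."""
    params = urlencode({
        "conditions[term]": term,
        "per_page": per_page,
        "order": "newest",  # newest documents first
    })
    return f"{FR_API}?{params}"

# Fetching the results requires network access, e.g.:
#   import json, urllib.request
#   with urllib.request.urlopen(build_ai_query()) as resp:
#       for doc in json.load(resp)["results"]:
#           print(doc["publication_date"], doc["title"])
```

A scheduled job running a query like this (or simply an email subscription on the Federal Register site) is usually enough to keep a small team abreast of new rulemaking.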


Thinking of expanding into the US market in the new age of AI? We have a resource for you, a free PDF: The Essential Tech Startup Guide for Expanding into the US Market using AI. This resource gives you even more insight into AI legislation in the EU vs. the US. Click the link to get your free copy!


What is the EU approach to regulating AI? Horizontal and vertical integration


Europe, in contrast, has taken a very centralized approach to AI. In January 2024, the European AI Office opened its doors to coordinate AI development and protections across all 27 member states. Shortly thereafter, the European Parliament made history by passing the world’s first comprehensive piece of legislation on artificial intelligence. The new EU law for AI is outlined in the EU AI Act, and it delineates AI activities by level of risk rather than by sector.


Risk categories include:


  • Unacceptable risk: AI systems which are considered a threat to people. These are completely banned, with the exception of certain uses of biometric technology by law enforcement. 


  • High risk: AI systems that can interfere with personal safety or fundamental rights. These systems will be assessed prior to being made available on the market and iteratively throughout the product lifecycle. High risk systems include, but are not limited to, consumer electronics and motorized vehicles as well as systems used in education and employment.


  • General purpose and generative AI: These technologies are required to disclose that the content they create was generated by AI. Developers of generative AI must also engineer their systems to prevent the generation of illegal content, and summaries of the copyrighted data used for training these systems must be published.


  • Limited risk: Low-risk AI technologies that are nevertheless subject to transparency requirements. Users of these systems must be informed that they are using AI technology and may cease using it at their own discretion.
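A startup triaging its own AI use cases might sketch the Act’s tiered structure as a simple internal lookup. The tier names below mirror the categories above; the example use-case assignments are purely illustrative assumptions, not legal classifications – real classification belongs with your legal counsel.

```python
# Illustrative sketch of the EU AI Act's risk tiers as an internal review aid.
# The tiers mirror the Act's categories; the use-case assignments are the
# author's illustrative assumptions, not legal determinations.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright (narrow law-enforcement exceptions)"
    HIGH = "assessed before market entry and throughout the product lifecycle"
    GENERAL_PURPOSE = "must disclose AI-generated content and training-data summaries"
    LIMITED = "transparency: users must know they are interacting with AI"

# Hypothetical internal inventory mapping each use case to a tier for review.
USE_CASE_TIERS = {
    "social-scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool": RiskTier.HIGH,
    "text-generation assistant": RiskTier.GENERAL_PURPOSE,
    "customer-support chatbot": RiskTier.LIMITED,
}

def review_obligation(use_case: str) -> str:
    """Return a one-line summary of the obligations attached to a use case."""
    tier = USE_CASE_TIERS[use_case]
    return f"{use_case}: {tier.name} -> {tier.value}"
```

Even a toy inventory like this makes it obvious which internal projects need a compliance review before launch.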


AI policy insights, specifically for startups


A thematic, sector-specific treatment of AI technology by the EU is found in the GenAI4EU initiative, which was part of the same AI innovation package that led to the creation of the European AI Office.


The expressed purpose of the GenAI4EU program is to “support startups and SMEs in developing trustworthy AI that complies with EU values and rules.” The industrial sectors targeted by the GenAI4EU program include robotics, health, biotech, manufacturing, mobility, climate and virtual worlds. 





Internationally Developed Standards: more insights on AI legislation in EU vs. US


Additionally, the International Organization for Standardization (ISO), based in Geneva, Switzerland, has published three international standards on AI:


  • ISO/IEC 42001:2023 Information technology — Artificial intelligence — Management system

  • ISO/IEC 23894:2023 Information technology — Artificial intelligence — Guidance on risk management

  • ISO/IEC 23053:2022 Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)


Because they have been developed with the consensus of international experts (including experts from the US and EU member states), policymakers often lean on these types of standards when drafting legislation. Other international standardization bodies currently producing AI standards include the International Electrotechnical Commission (IEC) and the Institute of Electrical and Electronics Engineers (IEEE).


As AI penetrates more industries, watch for new or revised standards that will either incorporate existing AI standards by reference or be highly specific to particular use cases. Additionally, keep in mind that in the future it may be necessary to have your business’s use of AI audited against international standards, much as ESG audits are conducted today.


Developing an AI Policy for Your Billion-Dollar Startup: The Role of Strategic Board Leadership


As your tech startup reaches the milestone of $1 billion USD in annual recurring revenue (ARR), the need for a comprehensive AI policy becomes critical. The content and structure of this policy will depend on several key factors:


  • Whether your company is developing AI-driven products or services, or simply utilizing AI for routine business operations (e.g., customer support via chatbots)

  • The current and anticipated future uses of AI within your organization

  • The associated risks with these AI applications

  • The jurisdictions where your company is legally incorporated and operates

  • Future market expansions and their regulatory implications


Before drafting your AI policy, it’s crucial to have a clear understanding of your business processes and the relevant AI legislation across different regions, such as the EU and the US.



As a seasoned board director, I can guide your company through this complex landscape, ensuring that your AI policy is robust, compliant, and tailored to your business needs. With my strategic oversight, we’ll work together to assess AI-related risks, taking into account your company’s specific operations, tools, and potential pain points, such as high costs or communication challenges.


In collaboration with your legal counsel, I can help craft an AI policy that not only protects your company from litigation but also positions you as a leader in ethical AI usage. This policy will educate your staff, demonstrate due diligence to your clients, and ensure your operations align with international AI regulations.


Operating a billion-dollar startup without a clear AI policy exposes your business to unnecessary risks. By adding experienced board leadership, you can navigate these challenges confidently.


If you're asking, "How do I draft an AI policy?" I'm here to provide the strategic guidance you need. Let’s discuss how my board membership can help your organization develop a comprehensive AI strategy that safeguards your future while driving growth.


Connect with me today to ensure your company is prepared for the evolving AI regulatory landscape.



 

This blog post can also be found on Bhuva Shakti’s LinkedIn newsletter “The BIG Bulletin.” Both the BIG Bulletin on LinkedIn and the BIG Blog are managed by Bhuva’s Impact Global. We encourage readers to visit Bhuva’s LinkedIn page for more insightful articles, posts, and resources.




