The recent passage of the EU AI Act by the European Parliament marks a significant step towards regulating artificial intelligence (AI) technologies on a global scale. This groundbreaking legislation aims to safeguard against key risks posed by AI applications, including systems that manipulate human behavior, biometric categorization systems, and the untargeted scraping of facial images to build facial recognition databases. The new rules also set out a framework for identifying and mitigating risks posed by high-risk AI systems, particularly in critical infrastructure and essential public services.
The EU’s internal market committee co-rapporteur, Brando Benifei, emphasized that the legislation protects the rights of workers and citizens while also creating opportunities for innovation. A newly established AI Office will support companies in complying with the rules before they take effect. The AI Act will be fully applicable two years after its entry into force, with the bans on prohibited practices applying six months after entry into force. Codes of practice and rules for general-purpose AI will follow in subsequent stages, at nine and twelve months respectively, to ensure compliance and transparency.
In the United States, a report commissioned by the Department of State has raised concerns about the risks posed by advanced AI technologies, warning that government intervention is needed to prevent catastrophic outcomes, up to and including human extinction. The report underscores the urgency for policymakers to introduce safeguards that balance the rapid pace of AI development against national security concerns. Major AI players such as Google, OpenAI, and Microsoft have acknowledged the risks posed by advanced AI systems, including the potential for weaponization and loss of control.
The report calls for a comprehensive action plan to stabilize AI development, including export controls, regulatory oversight, and the establishment of a new agency to oversee responsible AI use. It argues that these measures are essential to address the growing risks posed by the proliferation of advanced AI capabilities and the resulting threats to global security, and it urges the US government to intervene with regulatory frameworks that ensure the responsible development and deployment of AI technologies.
In Singapore, the Monetary Authority of Singapore is exploring the use of data analytics and AI to enhance its economic forecasting models and address the difficulty of predicting inflation in the post-pandemic period. Economists see potential for AI advances to inform monetary policy and improve economic forecasts. The collective failure of forecasters worldwide to anticipate the post-pandemic surge in inflation highlights the need for innovative approaches to economic modeling, including the integration of AI technologies to enhance predictive capability.