AI is not only a technological revolution but also a hard problem in governance. Major technology corporations are racing to develop powerful AI, producing breakthrough applications in fields from healthcare and education to finance. Yet this explosion is accompanied by concerns about security, privacy, the impact on jobs, and even the possibility of AI slipping beyond human control.
Many countries are struggling to find effective ways to govern AI. Europe has passed the AI Act, with strict regulations to protect consumers and control risks. Meanwhile, Brazil has also tried to enact a comprehensive AI law but has faced opposition from large technology corporations.
In contrast, countries such as India and South Africa have proactively attracted AI investment without imposing strict regulations, taking the view that regulation can follow development. This raises an important question: should AI be developed first and regulated later, or are clear rules needed from the outset to head off negative consequences? The answer is not simple, because AI is not only a technological problem but also one entangled with economic, political, and social interests.
The pace of development outstrips regulators
The recent AI explosion has made it difficult for governments to keep up and issue appropriate regulations. Major technology corporations such as Google, Microsoft, OpenAI, and Meta are not only investing heavily in AI but also proactively shaping the game, lobbying to weaken legal constraints that might affect their business operations.
Brazil is a typical example. The government previously proposed a comprehensive AI bill with many strict control provisions, including establishing an AI supervisory agency, protecting the copyright of training data, banning autonomous weapons, and controlling social media algorithms to limit fake news. However, the bill met strong opposition from major technology corporations and was revised several times before being passed in a watered-down version.
Brazil is not alone: in many other regions, large technology companies are also trying to influence AI policymaking. In Europe, Meta, OpenAI, and Amazon have lobbied to soften provisions of the AI Act. In Canada, Microsoft and Amazon publicly criticized the country's AI bill as too vague and an obstacle to innovation.
Meanwhile, in many developing countries, large technology corporations are welcomed as strategic partners. India is a typical example: its government has focused on attracting AI investment rather than on regulation. Companies such as Google, Meta, Microsoft, and OpenAI have committed to investing heavily in India, making it one of the fastest-growing AI hubs in the world.
This points to a reality: AI is developing so fast that governments cannot issue regulations in time to control it. While the West tries to establish a strict legal framework for AI, many other countries accept the risks in order to promote economic growth.
Develop first, or regulate first?
Should AI be allowed to develop freely first and be regulated later, or must it be controlled from the beginning to avoid unpredictable consequences? This is a question many governments and large technology corporations now face.
Some countries have chosen to control it from the start. The European Union leads in legislating on AI with the AI Act, the world's first comprehensive law specifically governing the development and application of AI. The law bans uses of AI such as social scoring, restricts AI in the judicial field, and requires AI-generated content to be clearly labeled. However, regulating too early can also slow innovation.
In Brazil, after the government proposed a strict AI bill, many experts and businesses worried that it could reduce the competitiveness of the country's technology industry. Under that pressure, the Brazilian government had to loosen some provisions of the law to avoid hampering the industry's development.
On the other hand, some countries choose to develop AI first and regulate later. India is a typical example: the country has focused on attracting AI investment without issuing strict regulations. The Indian government believes that building a strong AI ecosystem first will give the country a better position in the future, rather than being held back by legal constraints from the outset.
South Africa is following a similar path, encouraging large technology companies to invest in AI without setting strict regulations. Its government sees AI as a key driver of economic growth, while acknowledging that the lack of regulation can lead to unwanted consequences.
So which approach is right? There is no single answer. Overly strict AI regulation can reduce competitiveness and stifle innovation, while letting AI develop freely without rules can create serious risks. What matters is finding a reasonable balance between innovation and control, between economic benefits and the safety of society.
Some experts argue that instead of issuing rigid AI laws from the outset, countries should take a more flexible approach in which regulations are adjusted gradually as the technology evolves. This avoids legal obsolescence while still ensuring that AI does not develop unchecked.
Ultimately, AI is not only a technological problem but also a political, economic, and social one. Governing AI cannot rely solely on rigid regulations; it requires cooperation among governments, businesses, and civil-society organizations to ensure that AI is developed responsibly and safely, and that it delivers real benefits to people.